This is a cell for your own notes. Describe the background and context of this work here.
Review the structure definition: hosts.csv
# Parameters: the Ansible group and the hostname of the machine under maintenance
target_group = 'hadoop_all_cluster1'
target_hostname = 'sn02031601'
# Load the helper functions (read_machines, get_row) and pandas as pd
%run scripts/loader.py
TARGET_CLUSTER = 'Cluster1'
header, machines = read_machines("hosts.csv")
# Keep only the machines that belong to the target cluster
machines = [m for m in machines if m['Cluster'] == TARGET_CLUSTER]
print("Cluster(%s):" % TARGET_CLUSTER)
pd.DataFrame([get_row(header, m) for m in machines], columns=header)
Cluster(Cluster1):
 | Cluster | Type | Name | Internal IP | Service IP | VCPUs | Memory(MiB) | DFS Volumes | YARN VCPUs | YARN Total Memory(MB) | ... | HBase RegionServer | Tez | Hive | Pig | Client | Spark | Spark HistoryServer | KDC Master | KDC Slave | Docker |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | Cluster1 | cn0107 | cn01070401 | XXX.XXX.XXX.85 | XXX.XXX.XXX.197 | 12 | 98304 | NaN | NaN | NaN | ... | False | False | False | False | False | False | False | True | False | False |
1 | Cluster1 | cn0107 | cn01070402 | XXX.XXX.XXX.86 | XXX.XXX.XXX.198 | 12 | 98304 | NaN | NaN | NaN | ... | False | False | False | False | False | False | False | False | True | False |
2 | Cluster1 | cn0107 | cn01070403 | XXX.XXX.XXX.87 | XXX.XXX.XXX.199 | 12 | 98304 | NaN | NaN | NaN | ... | False | False | False | False | False | False | False | False | False | False |
3 | Cluster1 | cn0107 | cn01070404 | XXX.XXX.XXX.88 | XXX.XXX.XXX.200 | 12 | 98304 | NaN | NaN | NaN | ... | False | True | True | True | True | True | True | False | False | False |
4 | Cluster1 | cn0107 | cn01070603 | XXX.XXX.XXX.83 | XXX.XXX.XXX.195 | 12 | 98304 | NaN | NaN | NaN | ... | False | False | False | False | False | False | False | False | False | True |
5 | Cluster1 | cn0107 | cn01070604 | XXX.XXX.XXX.84 | XXX.XXX.XXX.196 | 12 | 98304 | NaN | NaN | NaN | ... | False | False | False | False | False | False | False | False | False | True |
6 | Cluster1 | sn0202 | sn02020401 | XXX.XXX.XXX.12 | XXX.XXX.XXX.230 | 16 | 65536 | 10.0 | 15.0 | 64512.0 | ... | True | False | False | False | False | False | False | False | False | False |
7 | Cluster1 | sn0202 | sn02021201 | XXX.XXX.XXX.8 | XXX.XXX.XXX.228 | 16 | 65536 | 10.0 | 15.0 | 64512.0 | ... | True | False | False | False | False | False | False | False | False | False |
8 | Cluster1 | sn0202 | sn02022001 | XXX.XXX.XXX.4 | XXX.XXX.XXX.226 | 16 | 65536 | 10.0 | 15.0 | 64512.0 | ... | True | False | False | False | False | False | False | False | False | False |
9 | Cluster1 | sn0202 | sn02022401 | XXX.XXX.XXX.2 | XXX.XXX.XXX.225 | 16 | 65536 | 10.0 | 15.0 | 64512.0 | ... | True | False | False | False | False | False | False | False | False | False |
10 | Cluster1 | sn0203 | sn02030401 | XXX.XXX.XXX.24 | XXX.XXX.XXX.236 | 16 | 65536 | 10.0 | 15.0 | 64512.0 | ... | True | False | False | False | False | False | False | False | False | False |
11 | Cluster1 | sn0203 | sn02031201 | XXX.XXX.XXX.20 | XXX.XXX.XXX.234 | 16 | 65536 | 10.0 | 15.0 | 64512.0 | ... | True | False | False | False | False | False | False | False | False | False |
12 | Cluster1 | sn0203 | sn02032001 | XXX.XXX.XXX.16 | XXX.XXX.XXX.232 | 16 | 65536 | 10.0 | 15.0 | 64512.0 | ... | True | False | False | False | False | False | False | False | False | False |
13 | Cluster1 | sn0203 | sn02032401 | XXX.XXX.XXX.14 | XXX.XXX.XXX.231 | 16 | 65536 | 10.0 | 15.0 | 64512.0 | ... | True | False | False | False | False | False | False | False | False | False |
14 | Cluster1 | sn0203 | sn02031601 | XXX.XXX.XXX.18 | XXX.XXX.XXX.233 | 16 | 65536 | 10.0 | 15.0 | 64512.0 | ... | True | False | False | False | False | False | False | False | False | False |
15 rows × 30 columns
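scripts/loader.py is not reproduced in this notebook. For reference, a minimal sketch of what read_machines and get_row could look like (a hypothetical reimplementation, assuming hosts.csv is a plain CSV with one header row; the real loader also parses the boolean role columns):

import csv

def read_machines(path):
    # Hypothetical sketch: return (header, machines), where each machine is a
    # dict keyed by the CSV header columns.
    with open(path) as f:
        reader = csv.reader(f)
        header = next(reader)
        machines = [dict(zip(header, row)) for row in reader]
    return header, machines

def get_row(header, machine):
    # Return the machine's values in header order, ready for a DataFrame row.
    return [machine.get(column) for column in header]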
# Use a list comprehension (rather than filter()) so target_hosts can be indexed below
target_hosts = [m for m in machines if m['Name'] == target_hostname]
assert target_hosts
For the following sections, remember the new server's Service NIC address.
host_new_machine = target_hosts[0]['Service IP']
host_new_machine
'XXX.XXX.XXX.233'
import tempfile
# Work in a scratch directory; clone the prerequisite playbooks into it
prereq_path = tempfile.mkdtemp()
!mkdir -p {prereq_path}/original
!git clone ssh://xxx.nii.ac.jp/xxx/aic-dataanalysis-prerequisite.git {prereq_path}/original
!ls -la {prereq_path}
Cloning into '/tmp/tmpwDTh_U/original'...
remote: Counting objects: 399, done.
remote: Compressing objects: 100% (251/251), done.
remote: Total 399 (delta 138), reused 196 (delta 55)
Receiving objects: 100% (399/399), 37.95 KiB | 0 bytes/s, done.
Resolving deltas: 100% (138/138), done.
Checking connectivity... done.
total 12
drwx------  3 root root 4096 Sep  2 18:56 .
drwxrwxrwt 65 root root 4096 Sep  2 18:56 ..
drwxr-xr-x  7 root root 4096 Sep  2 18:56 original
!mkdir -p {prereq_path}/current
%run common/inventory-base.py
import os
# Write a minimal inventory that contains only the target host
with open(os.path.join(prereq_path, 'current', 'hosts'), 'w') as f:
    write_base_inventory(target_hosts, f)
!cat {prereq_path}/current/hosts
[sn02031601]
XXX.XXX.XXX.233

[sn0203:children]
sn02031601

[Cluster1:children]
sn02031601

[ganglia_masters:children]

[kerberos_servers:children]
kdc_masters
kdc_slaves

[kdc_masters:children]

[kdc_masters:vars]
kdc_role=master

[kdc_slaves:children]

[kdc_slaves:vars]
kdc_role=slave

[kerberos_clients:children]
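common/inventory-base.py is also not reproduced here. Judging from the output above, write_base_inventory emits one INI group per host plus chassis (Type) and cluster groups that reference it as a child. A rough sketch (hypothetical; it omits the fixed Ganglia/Kerberos group skeleton that the real helper appends):

def write_base_inventory(hosts, f):
    # Hypothetical sketch of the inventory writer used above.
    for host in hosts:
        # A group per host, containing its Service IP
        f.write('[%s]\n%s\n\n' % (host['Name'], host['Service IP']))
        # Chassis (Type) and cluster groups list the host group as a child
        f.write('[%s:children]\n%s\n\n' % (host['Type'], host['Name']))
        f.write('[%s:children]\n%s\n\n' % (host['Cluster'], host['Name']))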
import yaml
# Load the group variables for this server type (here 'sn0203')
with open(os.path.join(prereq_path, 'original', 'group_vars', target_hosts[0]['Type']), 'r') as f:
    group_vars = yaml.safe_load(f)
group_vars
{'bonding_nic_1': '',
 'bonding_nic_2': '',
 'hdd_devices_and_mountpoints': [{'device': '/dev/sdc', 'mount': '/hadoop/tmp'},
  {'device': '/dev/sdd', 'mount': '/hadoop/data01'},
  {'device': '/dev/sde', 'mount': '/hadoop/data02'},
  {'device': '/dev/sdf', 'mount': '/hadoop/data03'},
  {'device': '/dev/sdg', 'mount': '/hadoop/data04'},
  {'device': '/dev/sdh', 'mount': '/hadoop/data05'},
  {'device': '/dev/sdi', 'mount': '/hadoop/data06'},
  {'device': '/dev/sdj', 'mount': '/hadoop/data07'},
  {'device': '/dev/sdk', 'mount': '/hadoop/data08'},
  {'device': '/dev/sdl', 'mount': '/hadoop/data09'},
  {'device': '/dev/sdm', 'mount': '/hadoop/data10'}],
 'log_device': '/dev/sdb',
 'server_nic_type': 'sn0203',
 'server_type': 'sn'}
Test the inventory...
!ansible -m ping -i {prereq_path}/current/hosts all
XXX.XXX.XXX.233 | SUCCESS => {
"changed": false,
"ping": "pong"
}
!ansible -b -a '/opt/MegaRAID/MegaCli/MegaCli64 -PDList -a0' -i {prereq_path}/current/hosts all
XXX.XXX.XXX.233 | SUCCESS | rc=0 >>
Adapter #0
Enclosure Device ID: 32
Slot Number: 0
Drive's position: DiskGroup: 1, Span: 0, Arm: 0
Enclosure position: 1
Device Id: 0
WWN: 50014EE55556CA41
Sequence Number: 2
Media Error Count: 0
Other Error Count: 0
Predictive Failure Count: 0
Last Predictive Failure Event Seq Number: 0
PD Type: SAS
Raw Size: 2.728 TB [0x15d50a3b0 Sectors]
Non Coerced Size: 2.728 TB [0x15d40a3b0 Sectors]
Coerced Size: 2.728 TB [0x15d400000 Sectors]
Sector Size: 0
Firmware state: Online, Spun Up
Device Firmware Level: D1R2
Shield Counter: 0
Successful diagnostics completion on : N/A
SAS Address(0): 0x50014ee55556ca42
SAS Address(1): 0x0
Connected Port Number: 0(path0)
Inquiry Data: WD WD3001FYYG D1R2WCC1F0082655
FDE Capable: Not Capable
FDE Enable: Disable
Secured: Unsecured
Locked: Unlocked
Needs EKM Attention: No
Foreign State: None
Device Speed: 6.0Gb/s
Link Speed: 6.0Gb/s
Media Type: Hard Disk Device
Drive Temperature :30C (86.00 F)
PI Eligibility: No
Drive is formatted for PI information: No
PI: No PI
Port-0 :
Port status: Active
Port's Linkspeed: 6.0Gb/s
Port-1 :
Port status: Active
Port's Linkspeed: Unknown
Drive has flagged a S.M.A.R.T alert : No
Enclosure Device ID: 32
Slot Number: 1
Drive's position: DiskGroup: 2, Span: 0, Arm: 0
Enclosure position: 1
Device Id: 1
WWN: 50014EE5AAAC5E2D
Sequence Number: 2
Media Error Count: 0
Other Error Count: 0
Predictive Failure Count: 0
Last Predictive Failure Event Seq Number: 0
PD Type: SAS
Raw Size: 2.728 TB [0x15d50a3b0 Sectors]
Non Coerced Size: 2.728 TB [0x15d40a3b0 Sectors]
Coerced Size: 2.728 TB [0x15d400000 Sectors]
Sector Size: 0
Firmware state: Online, Spun Up
Device Firmware Level: D1R2
Shield Counter: 0
Successful diagnostics completion on : N/A
SAS Address(0): 0x50014ee5aaac5e2e
SAS Address(1): 0x0
Connected Port Number: 0(path0)
Inquiry Data: WD WD3001FYYG D1R2WCC1F0068736
FDE Capable: Not Capable
FDE Enable: Disable
Secured: Unsecured
Locked: Unlocked
Needs EKM Attention: No
Foreign State: None
Device Speed: 6.0Gb/s
Link Speed: 6.0Gb/s
Media Type: Hard Disk Device
Drive Temperature :31C (87.80 F)
PI Eligibility: No
Drive is formatted for PI information: No
PI: No PI
Port-0 :
Port status: Active
Port's Linkspeed: 6.0Gb/s
Port-1 :
Port status: Active
Port's Linkspeed: Unknown
Drive has flagged a S.M.A.R.T alert : No
Enclosure Device ID: 32
Slot Number: 2
Drive's position: DiskGroup: 3, Span: 0, Arm: 0
Enclosure position: 1
Device Id: 2
WWN: 50014EE500020F85
Sequence Number: 2
Media Error Count: 0
Other Error Count: 0
Predictive Failure Count: 0
Last Predictive Failure Event Seq Number: 0
PD Type: SAS
Raw Size: 2.728 TB [0x15d50a3b0 Sectors]
Non Coerced Size: 2.728 TB [0x15d40a3b0 Sectors]
Coerced Size: 2.728 TB [0x15d400000 Sectors]
Sector Size: 0
Firmware state: Online, Spun Up
Device Firmware Level: D1R2
Shield Counter: 0
Successful diagnostics completion on : N/A
SAS Address(0): 0x50014ee500020f86
SAS Address(1): 0x0
Connected Port Number: 0(path0)
Inquiry Data: WD WD3001FYYG D1R2WCC1F0110913
FDE Capable: Not Capable
FDE Enable: Disable
Secured: Unsecured
Locked: Unlocked
Needs EKM Attention: No
Foreign State: None
Device Speed: 6.0Gb/s
Link Speed: 6.0Gb/s
Media Type: Hard Disk Device
Drive Temperature :30C (86.00 F)
PI Eligibility: No
Drive is formatted for PI information: No
PI: No PI
Port-0 :
Port status: Active
Port's Linkspeed: 6.0Gb/s
Port-1 :
Port status: Active
Port's Linkspeed: Unknown
Drive has flagged a S.M.A.R.T alert : No
Enclosure Device ID: 32
Slot Number: 3
Drive's position: DiskGroup: 4, Span: 0, Arm: 0
Enclosure position: 1
Device Id: 3
WWN: 50014EE55556F5F1
Sequence Number: 2
Media Error Count: 0
Other Error Count: 0
Predictive Failure Count: 0
Last Predictive Failure Event Seq Number: 0
PD Type: SAS
Raw Size: 2.728 TB [0x15d50a3b0 Sectors]
Non Coerced Size: 2.728 TB [0x15d40a3b0 Sectors]
Coerced Size: 2.728 TB [0x15d400000 Sectors]
Sector Size: 0
Firmware state: Online, Spun Up
Device Firmware Level: D1R2
Shield Counter: 0
Successful diagnostics completion on : N/A
SAS Address(0): 0x50014ee55556f5f2
SAS Address(1): 0x0
Connected Port Number: 0(path0)
Inquiry Data: WD WD3001FYYG D1R2WCC1F0027341
FDE Capable: Not Capable
FDE Enable: Disable
Secured: Unsecured
Locked: Unlocked
Needs EKM Attention: No
Foreign State: None
Device Speed: 6.0Gb/s
Link Speed: 6.0Gb/s
Media Type: Hard Disk Device
Drive Temperature :29C (84.20 F)
PI Eligibility: No
Drive is formatted for PI information: No
PI: No PI
Port-0 :
Port status: Active
Port's Linkspeed: 6.0Gb/s
Port-1 :
Port status: Active
Port's Linkspeed: Unknown
Drive has flagged a S.M.A.R.T alert : No
Enclosure Device ID: 32
Slot Number: 4
Drive's position: DiskGroup: 11, Span: 0, Arm: 0
Enclosure position: 1
Device Id: 4
WWN: 5000C50084AF7558
Sequence Number: 8
Media Error Count: 0
Other Error Count: 0
Predictive Failure Count: 0
Last Predictive Failure Event Seq Number: 0
PD Type: SAS
Raw Size: 2.728 TB [0x15d50a3b0 Sectors]
Non Coerced Size: 2.728 TB [0x15d40a3b0 Sectors]
Coerced Size: 2.728 TB [0x15d400000 Sectors]
Sector Size: 0
Firmware state: Online, Spun Up
Device Firmware Level: GS10
Shield Counter: 0
Successful diagnostics completion on : N/A
SAS Address(0): 0x5000c50084af7559
SAS Address(1): 0x0
Connected Port Number: 0(path0)
Inquiry Data: SEAGATE ST3000NM0023 GS10Z1Y3Z7MA
FDE Capable: Not Capable
FDE Enable: Disable
Secured: Unsecured
Locked: Unlocked
Needs EKM Attention: No
Foreign State: None
Device Speed: 6.0Gb/s
Link Speed: 6.0Gb/s
Media Type: Hard Disk Device
Drive Temperature :31C (87.80 F)
PI Eligibility: No
Drive is formatted for PI information: No
PI: No PI
Port-0 :
Port status: Active
Port's Linkspeed: 6.0Gb/s
Port-1 :
Port status: Active
Port's Linkspeed: Unknown
Drive has flagged a S.M.A.R.T alert : No
Enclosure Device ID: 32
Slot Number: 5
Drive's position: DiskGroup: 5, Span: 0, Arm: 0
Enclosure position: 1
Device Id: 5
WWN: 50014EE5555768A9
Sequence Number: 2
Media Error Count: 0
Other Error Count: 0
Predictive Failure Count: 0
Last Predictive Failure Event Seq Number: 0
PD Type: SAS
Raw Size: 2.728 TB [0x15d50a3b0 Sectors]
Non Coerced Size: 2.728 TB [0x15d40a3b0 Sectors]
Coerced Size: 2.728 TB [0x15d400000 Sectors]
Sector Size: 0
Firmware state: Online, Spun Up
Device Firmware Level: D1R2
Shield Counter: 0
Successful diagnostics completion on : N/A
SAS Address(0): 0x50014ee5555768aa
SAS Address(1): 0x0
Connected Port Number: 0(path0)
Inquiry Data: WD WD3001FYYG D1R2WCC1F0027451
FDE Capable: Not Capable
FDE Enable: Disable
Secured: Unsecured
Locked: Unlocked
Needs EKM Attention: No
Foreign State: None
Device Speed: 6.0Gb/s
Link Speed: 6.0Gb/s
Media Type: Hard Disk Device
Drive Temperature :30C (86.00 F)
PI Eligibility: No
Drive is formatted for PI information: No
PI: No PI
Port-0 :
Port status: Active
Port's Linkspeed: 6.0Gb/s
Port-1 :
Port status: Active
Port's Linkspeed: Unknown
Drive has flagged a S.M.A.R.T alert : No
Enclosure Device ID: 32
Slot Number: 6
Enclosure position: 1
Device Id: 6
WWN: 500003972838062C
Sequence Number: 7
Media Error Count: 0
Other Error Count: 0
Predictive Failure Count: 0
Last Predictive Failure Event Seq Number: 0
PD Type: SAS
Raw Size: 3.638 TB [0x1d1c0beb0 Sectors]
Non Coerced Size: 3.637 TB [0x1d1b0beb0 Sectors]
Coerced Size: 3.637 TB [0x1d1b00000 Sectors]
Sector Size: 0
Firmware state: Unconfigured(good), Spun Up
Device Firmware Level: DS06
Shield Counter: 0
Successful diagnostics completion on : N/A
SAS Address(0): 0x500003972838062e
SAS Address(1): 0x0
Connected Port Number: 0(path0)
Inquiry Data: TOSHIBA MG04SCA40EN DS0676L0A075FVNC
FDE Capable: Not Capable
FDE Enable: Disable
Secured: Unsecured
Locked: Unlocked
Needs EKM Attention: No
Foreign State: None
Device Speed: Unknown
Link Speed: 6.0Gb/s
Media Type: Hard Disk Device
Drive Temperature :27C (80.60 F)
PI Eligibility: No
Drive is formatted for PI information: No
PI: No PI
Port-0 :
Port status: Active
Port's Linkspeed: 6.0Gb/s
Port-1 :
Port status: Active
Port's Linkspeed: Unknown
Drive has flagged a S.M.A.R.T alert : No
Enclosure Device ID: 32
Slot Number: 7
Drive's position: DiskGroup: 6, Span: 0, Arm: 0
Enclosure position: 1
Device Id: 7
WWN: 50014EE55557359D
Sequence Number: 2
Media Error Count: 0
Other Error Count: 0
Predictive Failure Count: 0
Last Predictive Failure Event Seq Number: 0
PD Type: SAS
Raw Size: 2.728 TB [0x15d50a3b0 Sectors]
Non Coerced Size: 2.728 TB [0x15d40a3b0 Sectors]
Coerced Size: 2.728 TB [0x15d400000 Sectors]
Sector Size: 0
Firmware state: Online, Spun Up
Device Firmware Level: D1R2
Shield Counter: 0
Successful diagnostics completion on : N/A
SAS Address(0): 0x50014ee55557359e
SAS Address(1): 0x0
Connected Port Number: 0(path0)
Inquiry Data: WD WD3001FYYG D1R2WCC1F0070219
FDE Capable: Not Capable
FDE Enable: Disable
Secured: Unsecured
Locked: Unlocked
Needs EKM Attention: No
Foreign State: None
Device Speed: 6.0Gb/s
Link Speed: 6.0Gb/s
Media Type: Hard Disk Device
Drive Temperature :30C (86.00 F)
PI Eligibility: No
Drive is formatted for PI information: No
PI: No PI
Port-0 :
Port status: Active
Port's Linkspeed: 6.0Gb/s
Port-1 :
Port status: Active
Port's Linkspeed: Unknown
Drive has flagged a S.M.A.R.T alert : No
Enclosure Device ID: 32
Slot Number: 8
Drive's position: DiskGroup: 7, Span: 0, Arm: 0
Enclosure position: 1
Device Id: 8
WWN: 50014EE5AAAC2C09
Sequence Number: 2
Media Error Count: 0
Other Error Count: 0
Predictive Failure Count: 0
Last Predictive Failure Event Seq Number: 0
PD Type: SAS
Raw Size: 2.728 TB [0x15d50a3b0 Sectors]
Non Coerced Size: 2.728 TB [0x15d40a3b0 Sectors]
Coerced Size: 2.728 TB [0x15d400000 Sectors]
Sector Size: 0
Firmware state: Online, Spun Up
Device Firmware Level: D1R2
Shield Counter: 0
Successful diagnostics completion on : N/A
SAS Address(0): 0x50014ee5aaac2c0a
SAS Address(1): 0x0
Connected Port Number: 0(path0)
Inquiry Data: WD WD3001FYYG D1R2WCC1F0088217
FDE Capable: Not Capable
FDE Enable: Disable
Secured: Unsecured
Locked: Unlocked
Needs EKM Attention: No
Foreign State: None
Device Speed: 6.0Gb/s
Link Speed: 6.0Gb/s
Media Type: Hard Disk Device
Drive Temperature :31C (87.80 F)
PI Eligibility: No
Drive is formatted for PI information: No
PI: No PI
Port-0 :
Port status: Active
Port's Linkspeed: 6.0Gb/s
Port-1 :
Port status: Active
Port's Linkspeed: Unknown
Drive has flagged a S.M.A.R.T alert : No
Enclosure Device ID: 32
Slot Number: 9
Drive's position: DiskGroup: 10, Span: 0, Arm: 0
Enclosure position: 1
Device Id: 9
WWN: 5000C50084A9E584
Sequence Number: 12
Media Error Count: 0
Other Error Count: 0
Predictive Failure Count: 0
Last Predictive Failure Event Seq Number: 0
PD Type: SAS
Raw Size: 2.728 TB [0x15d50a3b0 Sectors]
Non Coerced Size: 2.728 TB [0x15d40a3b0 Sectors]
Coerced Size: 2.728 TB [0x15d400000 Sectors]
Sector Size: 0
Firmware state: Online, Spun Up
Device Firmware Level: GS10
Shield Counter: 0
Successful diagnostics completion on : N/A
SAS Address(0): 0x5000c50084a9e585
SAS Address(1): 0x0
Connected Port Number: 0(path0)
Inquiry Data: SEAGATE ST3000NM0023 GS10Z1Y3X7J5
FDE Capable: Not Capable
FDE Enable: Disable
Secured: Unsecured
Locked: Unlocked
Needs EKM Attention: No
Foreign State: None
Device Speed: 6.0Gb/s
Link Speed: 6.0Gb/s
Media Type: Hard Disk Device
Drive Temperature :31C (87.80 F)
PI Eligibility: No
Drive is formatted for PI information: No
PI: No PI
Port-0 :
Port status: Active
Port's Linkspeed: 6.0Gb/s
Port-1 :
Port status: Active
Port's Linkspeed: Unknown
Drive has flagged a S.M.A.R.T alert : No
Enclosure Device ID: 32
Slot Number: 10
Drive's position: DiskGroup: 8, Span: 0, Arm: 0
Enclosure position: 1
Device Id: 10
WWN: 50014EE5AAAB7151
Sequence Number: 2
Media Error Count: 0
Other Error Count: 0
Predictive Failure Count: 0
Last Predictive Failure Event Seq Number: 0
PD Type: SAS
Raw Size: 2.728 TB [0x15d50a3b0 Sectors]
Non Coerced Size: 2.728 TB [0x15d40a3b0 Sectors]
Coerced Size: 2.728 TB [0x15d400000 Sectors]
Sector Size: 0
Firmware state: Online, Spun Up
Device Firmware Level: D1R2
Shield Counter: 0
Successful diagnostics completion on : N/A
SAS Address(0): 0x50014ee5aaab7152
SAS Address(1): 0x0
Connected Port Number: 0(path0)
Inquiry Data: WD WD3001FYYG D1R2WCC1F0021668
FDE Capable: Not Capable
FDE Enable: Disable
Secured: Unsecured
Locked: Unlocked
Needs EKM Attention: No
Foreign State: None
Device Speed: 6.0Gb/s
Link Speed: 6.0Gb/s
Media Type: Hard Disk Device
Drive Temperature :32C (89.60 F)
PI Eligibility: No
Drive is formatted for PI information: No
PI: No PI
Port-0 :
Port status: Active
Port's Linkspeed: 6.0Gb/s
Port-1 :
Port status: Active
Port's Linkspeed: Unknown
Drive has flagged a S.M.A.R.T alert : No
Enclosure Device ID: 32
Slot Number: 11
Drive's position: DiskGroup: 9, Span: 0, Arm: 0
Enclosure position: 1
Device Id: 11
WWN: 50014EE5AAAC2B81
Sequence Number: 2
Media Error Count: 0
Other Error Count: 0
Predictive Failure Count: 0
Last Predictive Failure Event Seq Number: 0
PD Type: SAS
Raw Size: 2.728 TB [0x15d50a3b0 Sectors]
Non Coerced Size: 2.728 TB [0x15d40a3b0 Sectors]
Coerced Size: 2.728 TB [0x15d400000 Sectors]
Sector Size: 0
Firmware state: Online, Spun Up
Device Firmware Level: D1R2
Shield Counter: 0
Successful diagnostics completion on : N/A
SAS Address(0): 0x50014ee5aaac2b82
SAS Address(1): 0x0
Connected Port Number: 0(path0)
Inquiry Data: WD WD3001FYYG D1R2WCC1F0095409
FDE Capable: Not Capable
FDE Enable: Disable
Secured: Unsecured
Locked: Unlocked
Needs EKM Attention: No
Foreign State: None
Device Speed: 6.0Gb/s
Link Speed: 6.0Gb/s
Media Type: Hard Disk Device
Drive Temperature :31C (87.80 F)
PI Eligibility: No
Drive is formatted for PI information: No
PI: No PI
Port-0 :
Port status: Active
Port's Linkspeed: 6.0Gb/s
Port-1 :
Port status: Active
Port's Linkspeed: Unknown
Drive has flagged a S.M.A.R.T alert : No
Enclosure Device ID: 32
Slot Number: 12
Drive's position: DiskGroup: 0, Span: 0, Arm: 0
Enclosure position: 1
Device Id: 12
WWN: 5000C5005EFECCD4
Sequence Number: 2
Media Error Count: 0
Other Error Count: 0
Predictive Failure Count: 0
Last Predictive Failure Event Seq Number: 0
PD Type: SAS
Raw Size: 136.732 GB [0x11177328 Sectors]
Non Coerced Size: 136.232 GB [0x11077328 Sectors]
Coerced Size: 136.125 GB [0x11040000 Sectors]
Sector Size: 0
Firmware state: Online, Spun Up
Device Firmware Level: YS09
Shield Counter: 0
Successful diagnostics completion on : N/A
SAS Address(0): 0x5000c5005efeccd5
SAS Address(1): 0x0
Connected Port Number: 0(path0)
Inquiry Data: SEAGATE ST9146853SS YS096XM22WTQ
FDE Capable: Not Capable
FDE Enable: Disable
Secured: Unsecured
Locked: Unlocked
Needs EKM Attention: No
Foreign State: None
Device Speed: 6.0Gb/s
Link Speed: 6.0Gb/s
Media Type: Hard Disk Device
Drive Temperature :48C (118.40 F)
PI Eligibility: No
Drive is formatted for PI information: No
PI: No PI
Port-0 :
Port status: Active
Port's Linkspeed: 6.0Gb/s
Port-1 :
Port status: Active
Port's Linkspeed: Unknown
Drive has flagged a S.M.A.R.T alert : No
Enclosure Device ID: 32
Slot Number: 13
Drive's position: DiskGroup: 0, Span: 0, Arm: 1
Enclosure position: 1
Device Id: 13
WWN: 5000C5005EFEFE84
Sequence Number: 2
Media Error Count: 0
Other Error Count: 0
Predictive Failure Count: 0
Last Predictive Failure Event Seq Number: 0
PD Type: SAS
Raw Size: 136.732 GB [0x11177328 Sectors]
Non Coerced Size: 136.232 GB [0x11077328 Sectors]
Coerced Size: 136.125 GB [0x11040000 Sectors]
Sector Size: 0
Firmware state: Online, Spun Up
Device Firmware Level: YS09
Shield Counter: 0
Successful diagnostics completion on : N/A
SAS Address(0): 0x5000c5005efefe85
SAS Address(1): 0x0
Connected Port Number: 0(path0)
Inquiry Data: SEAGATE ST9146853SS YS096XM22X09
FDE Capable: Not Capable
FDE Enable: Disable
Secured: Unsecured
Locked: Unlocked
Needs EKM Attention: No
Foreign State: None
Device Speed: 6.0Gb/s
Link Speed: 6.0Gb/s
Media Type: Hard Disk Device
Drive Temperature :47C (116.60 F)
PI Eligibility: No
Drive is formatted for PI information: No
PI: No PI
Port-0 :
Port status: Active
Port's Linkspeed: 6.0Gb/s
Port-1 :
Port status: Active
Port's Linkspeed: Unknown
Drive has flagged a S.M.A.R.T alert : No
Exit Code: 0x00
The Enclosure Device ID:Slot Number for the new disk is 32:6, the one drive reported as Unconfigured(good) above.
new_hdd_location = (32, 6)
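Rather than eyeballing the dump, the Unconfigured(good) drive can also be located programmatically. A small sketch (hypothetical helper; assumes the -PDList output above has been captured in a string pdlist_output):

import re

def find_unconfigured_drives(pdlist_output):
    # Scan MegaCli -PDList output and return (enclosure, slot) tuples for
    # every drive whose firmware state is Unconfigured(good).
    drives = []
    enclosure = slot = None
    for raw in pdlist_output.splitlines():
        line = raw.strip()
        m = re.match(r'Enclosure Device ID: (\d+)', line)
        if m:
            enclosure = int(m.group(1))
        m = re.match(r'Slot Number: (\d+)', line)
        if m:
            slot = int(m.group(1))
        if line.startswith('Firmware state: Unconfigured(good)'):
            drives.append((enclosure, slot))
    return drives

# For the dump above this returns [(32, 6)].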
Checking all Logical Drives...
!ansible -b -a '/opt/MegaRAID/MegaCli/MegaCli64 -LDInfo -Lall -a0' -i {prereq_path}/current/hosts all
XXX.XXX.XXX.233 | SUCCESS | rc=0 >>
Adapter 0 -- Virtual Drive Information:
Virtual Drive: 0 (Target Id: 0)
Name :
RAID Level : Primary-1, Secondary-0, RAID Level Qualifier-0
Size : 136.125 GB
Sector Size : 512
Mirror Data : 136.125 GB
State : Optimal
Strip Size : 64 KB
Number Of Drives : 2
Span Depth : 1
Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU
Current Cache Policy: WriteThrough, ReadAheadNone, Direct, No Write Cache if Bad BBU
Default Access Policy: Read/Write
Current Access Policy: Read/Write
Disk Cache Policy : Disk's Default
Encryption Type : None
Default Power Savings Policy: Controller Defined
Current Power Savings Policy: None
Can spin up in 1 minute: Yes
LD has drives that support T10 power conditions: Yes
LD's IO profile supports MAX power savings with cached writes: No
Bad Blocks Exist: No
Is VD Cached: Yes
Cache Cade Type : Read Only
Virtual Drive: 1 (Target Id: 1)
Name :
RAID Level : Primary-0, Secondary-0, RAID Level Qualifier-0
Size : 2.728 TB
Sector Size : 512
Parity Size : 0
State : Optimal
Strip Size : 64 KB
Number Of Drives : 1
Span Depth : 1
Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU
Current Cache Policy: WriteThrough, ReadAheadNone, Direct, No Write Cache if Bad BBU
Default Access Policy: Read/Write
Current Access Policy: Read/Write
Disk Cache Policy : Disk's Default
Encryption Type : None
Default Power Savings Policy: Controller Defined
Current Power Savings Policy: None
Can spin up in 1 minute: Yes
LD has drives that support T10 power conditions: Yes
LD's IO profile supports MAX power savings with cached writes: No
Bad Blocks Exist: No
Is VD Cached: Yes
Cache Cade Type : Read Only
Virtual Drive: 2 (Target Id: 2)
Name :
RAID Level : Primary-0, Secondary-0, RAID Level Qualifier-0
Size : 2.728 TB
Sector Size : 512
Parity Size : 0
State : Optimal
Strip Size : 64 KB
Number Of Drives : 1
Span Depth : 1
Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU
Current Cache Policy: WriteThrough, ReadAheadNone, Direct, No Write Cache if Bad BBU
Default Access Policy: Read/Write
Current Access Policy: Read/Write
Disk Cache Policy : Disk's Default
Encryption Type : None
Default Power Savings Policy: Controller Defined
Current Power Savings Policy: None
Can spin up in 1 minute: Yes
LD has drives that support T10 power conditions: Yes
LD's IO profile supports MAX power savings with cached writes: No
Bad Blocks Exist: No
Is VD Cached: Yes
Cache Cade Type : Read Only
Virtual Drive: 3 (Target Id: 3)
Name :
RAID Level : Primary-0, Secondary-0, RAID Level Qualifier-0
Size : 2.728 TB
Sector Size : 512
Parity Size : 0
State : Optimal
Strip Size : 64 KB
Number Of Drives : 1
Span Depth : 1
Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU
Current Cache Policy: WriteThrough, ReadAheadNone, Direct, No Write Cache if Bad BBU
Default Access Policy: Read/Write
Current Access Policy: Read/Write
Disk Cache Policy : Disk's Default
Encryption Type : None
Default Power Savings Policy: Controller Defined
Current Power Savings Policy: None
Can spin up in 1 minute: Yes
LD has drives that support T10 power conditions: Yes
LD's IO profile supports MAX power savings with cached writes: No
Bad Blocks Exist: No
Is VD Cached: Yes
Cache Cade Type : Read Only
Virtual Drive: 4 (Target Id: 4)
Name :
RAID Level : Primary-0, Secondary-0, RAID Level Qualifier-0
Size : 2.728 TB
Sector Size : 512
Parity Size : 0
State : Optimal
Strip Size : 64 KB
Number Of Drives : 1
Span Depth : 1
Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU
Current Cache Policy: WriteThrough, ReadAheadNone, Direct, No Write Cache if Bad BBU
Default Access Policy: Read/Write
Current Access Policy: Read/Write
Disk Cache Policy : Disk's Default
Encryption Type : None
Default Power Savings Policy: Controller Defined
Current Power Savings Policy: None
Can spin up in 1 minute: Yes
LD has drives that support T10 power conditions: Yes
LD's IO profile supports MAX power savings with cached writes: No
Bad Blocks Exist: No
Is VD Cached: Yes
Cache Cade Type : Read Only
Virtual Drive: 5 (Target Id: 5)
Name :
RAID Level : Primary-0, Secondary-0, RAID Level Qualifier-0
Size : 2.728 TB
Sector Size : 512
Parity Size : 0
State : Optimal
Strip Size : 64 KB
Number Of Drives : 1
Span Depth : 1
Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU
Current Cache Policy: WriteThrough, ReadAheadNone, Direct, No Write Cache if Bad BBU
Default Access Policy: Read/Write
Current Access Policy: Read/Write
Disk Cache Policy : Disk's Default
Encryption Type : None
Default Power Savings Policy: Controller Defined
Current Power Savings Policy: None
Can spin up in 1 minute: Yes
LD has drives that support T10 power conditions: Yes
LD's IO profile supports MAX power savings with cached writes: No
Bad Blocks Exist: No
Is VD Cached: Yes
Cache Cade Type : Read Only
Virtual Drive: 6 (Target Id: 6)
Name :
RAID Level : Primary-0, Secondary-0, RAID Level Qualifier-0
Size : 2.728 TB
Sector Size : 512
Parity Size : 0
State : Optimal
Strip Size : 64 KB
Number Of Drives : 1
Span Depth : 1
Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU
Current Cache Policy: WriteThrough, ReadAheadNone, Direct, No Write Cache if Bad BBU
Default Access Policy: Read/Write
Current Access Policy: Read/Write
Disk Cache Policy : Disk's Default
Encryption Type : None
Default Power Savings Policy: Controller Defined
Current Power Savings Policy: None
Can spin up in 1 minute: Yes
LD has drives that support T10 power conditions: Yes
LD's IO profile supports MAX power savings with cached writes: No
Bad Blocks Exist: No
Is VD Cached: Yes
Cache Cade Type : Read Only
Virtual Drive: 8 (Target Id: 8)
Name :
RAID Level : Primary-0, Secondary-0, RAID Level Qualifier-0
Size : 2.728 TB
Sector Size : 512
Parity Size : 0
State : Optimal
Strip Size : 64 KB
Number Of Drives : 1
Span Depth : 1
Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU
Current Cache Policy: WriteThrough, ReadAheadNone, Direct, No Write Cache if Bad BBU
Default Access Policy: Read/Write
Current Access Policy: Read/Write
Disk Cache Policy : Disk's Default
Encryption Type : None
Default Power Savings Policy: Controller Defined
Current Power Savings Policy: None
Can spin up in 1 minute: Yes
LD has drives that support T10 power conditions: Yes
LD's IO profile supports MAX power savings with cached writes: No
Bad Blocks Exist: No
Is VD Cached: Yes
Cache Cade Type : Read Only
Virtual Drive: 9 (Target Id: 9)
Name :
RAID Level : Primary-0, Secondary-0, RAID Level Qualifier-0
Size : 2.728 TB
Sector Size : 512
Parity Size : 0
State : Optimal
Strip Size : 64 KB
Number Of Drives : 1
Span Depth : 1
Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU
Current Cache Policy: WriteThrough, ReadAheadNone, Direct, No Write Cache if Bad BBU
Default Access Policy: Read/Write
Current Access Policy: Read/Write
Disk Cache Policy : Disk's Default
Encryption Type : None
Default Power Savings Policy: Controller Defined
Current Power Savings Policy: None
Can spin up in 1 minute: Yes
LD has drives that support T10 power conditions: Yes
LD's IO profile supports MAX power savings with cached writes: No
Bad Blocks Exist: No
Is VD Cached: Yes
Cache Cade Type : Read Only
Virtual Drive: 10 (Target Id: 10)
Name :
RAID Level : Primary-0, Secondary-0, RAID Level Qualifier-0
Size : 2.728 TB
Sector Size : 512
Parity Size : 0
State : Optimal
Strip Size : 64 KB
Number Of Drives : 1
Span Depth : 1
Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU
Current Cache Policy: WriteThrough, ReadAheadNone, Direct, No Write Cache if Bad BBU
Default Access Policy: Read/Write
Current Access Policy: Read/Write
Disk Cache Policy : Disk's Default
Encryption Type : None
Default Power Savings Policy: Controller Defined
Current Power Savings Policy: None
Can spin up in 1 minute: Yes
LD has drives that support T10 power conditions: Yes
LD's IO profile supports MAX power savings with cached writes: No
Bad Blocks Exist: No
Is VD Cached: Yes
Cache Cade Type : Read Only
Virtual Drive: 11 (Target Id: 11)
Name :
RAID Level : Primary-0, Secondary-0, RAID Level Qualifier-0
Size : 2.728 TB
Sector Size : 512
Parity Size : 0
State : Optimal
Strip Size : 64 KB
Number Of Drives : 1
Span Depth : 1
Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU
Current Cache Policy: WriteThrough, ReadAheadNone, Direct, No Write Cache if Bad BBU
Default Access Policy: Read/Write
Current Access Policy: Read/Write
Disk Cache Policy : Disk's Default
Encryption Type : None
Default Power Savings Policy: Controller Defined
Current Power Savings Policy: None
Can spin up in 1 minute: Yes
LD has drives that support T10 power conditions: Yes
LD's IO profile supports MAX power savings with cached writes: No
Bad Blocks Exist: No
Is VD Cached: Yes
Cache Cade Type : Read Only
Virtual Drive: 12 (Target Id: 12)
Name :
RAID Level : Primary-0, Secondary-0, RAID Level Qualifier-0
Size : 2.728 TB
Sector Size : 512
Parity Size : 0
State : Optimal
Strip Size : 64 KB
Number Of Drives : 1
Span Depth : 1
Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU
Current Cache Policy: WriteThrough, ReadAheadNone, Direct, No Write Cache if Bad BBU
Default Access Policy: Read/Write
Current Access Policy: Read/Write
Disk Cache Policy : Disk's Default
Encryption Type : None
Default Power Savings Policy: Controller Defined
Current Power Savings Policy: None
Can spin up in 1 minute: Yes
LD has drives that support T10 power conditions: Yes
LD's IO profile supports MAX power savings with cached writes: No
Bad Blocks Exist: No
Is VD Cached: Yes
Cache Cade Type : Read Only
Exit Code: 0x00
Confirmed that Logical Drive 7 is missing: the listing above jumps from Virtual Drive 6 straight to Virtual Drive 8.
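The gap can also be detected from the -LDInfo output itself. A sketch (hypothetical helper; assumes the listing above is captured in a string ldinfo_output):

import re

def missing_vd_ids(ldinfo_output):
    # Collect the VD numbers present in the listing and report any holes
    # in the contiguous range.
    ids = sorted(int(m.group(1)) for m in
                 re.finditer(r'Virtual Drive: (\d+)', ldinfo_output))
    if not ids:
        return []
    return sorted(set(range(ids[0], ids[-1] + 1)) - set(ids))

# For the listing above this returns [7].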
Add the new disk as a logical drive (RAID 0 over the single physical disk):
!ansible -b -a '/opt/MegaRAID/MegaCli/MegaCli64 -CfgLdAdd -r0 [{new_hdd_location[0]}:{new_hdd_location[1]}] -a0' -i {prereq_path}/current/hosts all
XXX.XXX.XXX.233 | FAILED | rc=84 >>
Adapter 0: Configure Adapter Failed
FW error description:
The current operation is not allowed because the controller has data in cache for offline or missing virtual drives.
Exit Code: 0x54
Oops, the preserved cache remains... OK, let's remove it.
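Before discarding, the controller can be asked which virtual drives hold preserved cache; a suggestion not run in this session, using MegaCli's -GetPreservedCacheList query:

!ansible -b -a '/opt/MegaRAID/MegaCli/MegaCli64 -GetPreservedCacheList -a0' -i {prereq_path}/current/hosts all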
Record the old LD number so its preserved cache can be discarded.
previous_ld_number = 7
!ansible -b -a '/opt/MegaRAID/MegaCli/MegaCli64 -DiscardPreservedCache -L{previous_ld_number} -a0' -i {prereq_path}/current/hosts all
XXX.XXX.XXX.233 | SUCCESS | rc=0 >>
Adapter #0
Virtual Drive(Target ID 07): Preserved Cache Data Cleared.
Exit Code: 0x00
OK, retry creating the LD...
!ansible -b -a '/opt/MegaRAID/MegaCli/MegaCli64 -CfgLdAdd -r0 [{new_hdd_location[0]}:{new_hdd_location[1]}] -a0' -i {prereq_path}/current/hosts all
XXX.XXX.XXX.233 | SUCCESS | rc=0 >>
Adapter 0: Created VD 7
Adapter 0: Configured the Adapter!!
Exit Code: 0x00
No problem! The new VD number is 7, as expected.
Next, reboot the machine and recreate all the volumes. Before doing anything destructive, let's dry run (check mode)...
!ansible -CDv -b -a 'reboot' -i {prereq_path}/current/hosts all
Using /etc/ansible/ansible.cfg as config file
XXX.XXX.XXX.233 | SKIPPED
OK, reboot it...!
!ansible -b -a 'reboot' -i {prereq_path}/current/hosts all
XXX.XXX.XXX.233 | SUCCESS | rc=0 >>
Wait for the machine to come back up...
!ping -c 4 {host_new_machine}
PING XXX.XXX.XXX.233 (XXX.XXX.XXX.233) 56(84) bytes of data.
64 bytes from XXX.XXX.XXX.233: icmp_seq=1 ttl=63 time=0.365 ms
64 bytes from XXX.XXX.XXX.233: icmp_seq=2 ttl=63 time=0.323 ms
64 bytes from XXX.XXX.XXX.233: icmp_seq=3 ttl=63 time=0.290 ms
64 bytes from XXX.XXX.XXX.233: icmp_seq=4 ttl=63 time=0.350 ms

--- XXX.XXX.XXX.233 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 2997ms
rtt min/avg/max/mdev = 0.290/0.332/0.365/0.028 ms
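Instead of pinging by hand, a small polling loop can block until the host answers. A sketch (hypothetical helper; assumes the local ping binary accepts -c and -W as on Linux):

import subprocess
import time

def wait_for_host(ip, timeout=600, interval=10):
    # Send one ICMP echo at a time until the host replies or we give up.
    deadline = time.time() + timeout
    while time.time() < deadline:
        if subprocess.call(['ping', '-c', '1', '-W', '2', ip]) == 0:
            return True
        time.sleep(interval)
    return False

# wait_for_host(host_new_machine)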
!ansible -m ping {host_new_machine}
XXX.XXX.XXX.233 | SUCCESS => {
"changed": false,
"ping": "pong"
}
Review mountpoints...
!ansible -a 'df -H' -i {prereq_path}/current/hosts all
XXX.XXX.XXX.233 | SUCCESS | rc=0 >>
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 14G 4.0G 9.3G 30% /
tmpfs 34G 0 34G 0% /dev/shm
/dev/sda5 726M 180M 509M 27% /mnt
/dev/sdb1 3.0T 7.1G 2.8T 1% /var/log
/dev/sdc1 3.0T 19G 2.8T 1% /hadoop/tmp
/dev/sdd1 3.0T 46G 2.8T 2% /hadoop/data01
/dev/sde1 3.0T 51G 2.8T 2% /hadoop/data02
/dev/sdk1 3.0T 34G 2.8T 2% /hadoop/data03
/dev/sdg1 3.0T 48G 2.8T 2% /hadoop/data04
/dev/sdi1 3.0T 48G 2.8T 2% /hadoop/data06
/dev/sdj1 3.0T 48G 2.8T 2% /hadoop/data07
/dev/sdl1 3.0T 50G 2.8T 2% /hadoop/data09
/dev/sdm1 3.0T 50G 2.8T 2% /hadoop/data10
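Two of the eleven expected mount points, /hadoop/data05 and /hadoop/data08, are missing from the df output (and the device letters have shifted), which is why all the volumes are recreated below. The gaps can also be spotted programmatically; a sketch (hypothetical helper; group_vars is reloaded in the next cell, and df_output is assumed to hold the df -H text above):

def missing_mounts(group_vars, df_output):
    # Compare the mount points expected from group_vars with those that
    # actually appear in the df output.
    expected = set(hdd['mount'] for hdd in group_vars['hdd_devices_and_mountpoints'])
    mounted = set(line.split()[-1] for line in df_output.splitlines()[1:] if line.strip())
    return sorted(expected - mounted)

# Expected here: ['/hadoop/data05', '/hadoop/data08']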
import yaml
# Reload the group variables for this server type
with open(os.path.join(prereq_path, 'original', 'group_vars', target_hosts[0]['Type']), 'r') as f:
    group_vars = yaml.safe_load(f)
group_vars
{'bonding_nic_1': '',
 'bonding_nic_2': '',
 'hdd_devices_and_mountpoints': [{'device': '/dev/sdc', 'mount': '/hadoop/tmp'},
  {'device': '/dev/sdd', 'mount': '/hadoop/data01'},
  {'device': '/dev/sde', 'mount': '/hadoop/data02'},
  {'device': '/dev/sdf', 'mount': '/hadoop/data03'},
  {'device': '/dev/sdg', 'mount': '/hadoop/data04'},
  {'device': '/dev/sdh', 'mount': '/hadoop/data05'},
  {'device': '/dev/sdi', 'mount': '/hadoop/data06'},
  {'device': '/dev/sdj', 'mount': '/hadoop/data07'},
  {'device': '/dev/sdk', 'mount': '/hadoop/data08'},
  {'device': '/dev/sdl', 'mount': '/hadoop/data09'},
  {'device': '/dev/sdm', 'mount': '/hadoop/data10'}],
 'log_device': '/dev/sdb',
 'server_nic_type': 'sn0203',
 'server_type': 'sn'}
!mkdir -p {prereq_path}/current/group_vars
# Drop log_device so the playbooks leave the already-formatted /var/log volume alone
del group_vars['log_device']
with open(os.path.join(prereq_path, 'current', 'group_vars', target_hosts[0]['Type']), 'w') as f:
    f.write(yaml.dump(group_vars))
!cat {prereq_path}/current/group_vars/{target_hosts[0]['Type']}
bonding_nic_1: ''
bonding_nic_2: ''
hdd_devices_and_mountpoints:
- {device: /dev/sdc, mount: /hadoop/tmp}
- {device: /dev/sdd, mount: /hadoop/data01}
- {device: /dev/sde, mount: /hadoop/data02}
- {device: /dev/sdf, mount: /hadoop/data03}
- {device: /dev/sdg, mount: /hadoop/data04}
- {device: /dev/sdh, mount: /hadoop/data05}
- {device: /dev/sdi, mount: /hadoop/data06}
- {device: /dev/sdj, mount: /hadoop/data07}
- {device: /dev/sdk, mount: /hadoop/data08}
- {device: /dev/sdl, mount: /hadoop/data09}
- {device: /dev/sdm, mount: /hadoop/data10}
server_nic_type: sn0203
server_type: sn
Remove all files on all volumes...
# On each volume, remove the Hadoop data: the YARN scratch space on /hadoop/tmp,
# the DFS blocks everywhere else
for hdd in group_vars['hdd_devices_and_mountpoints']:
    if hdd['mount'] == '/hadoop/tmp':
        dirs = os.path.join(hdd['mount'], 'hadoop-yarn')
    else:
        dirs = os.path.join(hdd['mount'], 'dfs')
    !ansible -b -a 'rm -fr {dirs}' -i {prereq_path}/current/hosts all
XXX.XXX.XXX.233 | SUCCESS | rc=0 >>
XXX.XXX.XXX.233 | SUCCESS | rc=0 >>
XXX.XXX.XXX.233 | SUCCESS | rc=0 >>
XXX.XXX.XXX.233 | SUCCESS | rc=0 >>
XXX.XXX.XXX.233 | SUCCESS | rc=0 >>
XXX.XXX.XXX.233 | SUCCESS | rc=0 >>
XXX.XXX.XXX.233 | SUCCESS | rc=0 >>
XXX.XXX.XXX.233 | SUCCESS | rc=0 >>
XXX.XXX.XXX.233 | SUCCESS | rc=0 >>
XXX.XXX.XXX.233 | SUCCESS | rc=0 >>
XXX.XXX.XXX.233 | SUCCESS | rc=0 >>
!cp {prereq_path}/original/playbooks/destructive/clean-hdds.yml {prereq_path}/current/
First, let's dry run...!
!ansible-playbook -CDv -i {prereq_path}/current/hosts {prereq_path}/current/clean-hdds.yml
Using /etc/ansible/ansible.cfg as config file
[DEPRECATION WARNING]: Instead of sudo/sudo_user, use become/become_user and make sure become_method is 'sudo' (default). This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.

PLAY [all] *********************************************************************

TASK [setup] *******************************************************************
ok: [XXX.XXX.XXX.233]

TASK [install gdisk package.] **************************************************
ok: [XXX.XXX.XXX.233] => {"changed": false, "msg": "", "rc": 0, "results": ["gdisk-0.8.10-1.el6.x86_64 providing gdisk is already installed"]}

TASK [unmount disks to be formatted.] ******************************************
skipping: [XXX.XXX.XXX.233] => {"changed": false, "skip_reason": "Conditional check failed", "skipped": true}

TASK [zap GPT and MBR data structure of HDD.] **********************************
skipping: [XXX.XXX.XXX.233] => {"changed": false, "skip_reason": "Conditional check failed", "skipped": true}

TASK [create partition.] *******************************************************
skipping: [XXX.XXX.XXX.233] => {"changed": false, "skip_reason": "Conditional check failed", "skipped": true}

TASK [unmount disks to be formatted.] ******************************************
[DEPRECATION WARNING]: Using bare variables is deprecated. Update your playbooks so that the environment value uses the full variable syntax ('{{hdd_devices_and_mountpoints}}'). This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
changed: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdc', u'mount': u'/hadoop/tmp'}) => {"changed": true, "fstab": "/etc/fstab", "fstype": "ext4", "item": {"device": "/dev/sdc", "mount": "/hadoop/tmp"}, "name": "/hadoop/tmp", "src": "/dev/sdc1"}
changed: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdd', u'mount': u'/hadoop/data01'}) => {"changed": true, "fstab": "/etc/fstab", "fstype": "ext4", "item": {"device": "/dev/sdd", "mount": "/hadoop/data01"}, "name": "/hadoop/data01", "src": "/dev/sdd1"}
changed: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sde', u'mount': u'/hadoop/data02'}) => {"changed": true, "fstab": "/etc/fstab", "fstype": "ext4", "item": {"device": "/dev/sde", "mount": "/hadoop/data02"}, "name": "/hadoop/data02", "src": "/dev/sde1"}
changed: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdf', u'mount': u'/hadoop/data03'}) => {"changed": true, "fstab": "/etc/fstab", "fstype": "ext4", "item": {"device": "/dev/sdf", "mount": "/hadoop/data03"}, "name": "/hadoop/data03", "src": "/dev/sdf1"}
changed: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdg', u'mount': u'/hadoop/data04'}) => {"changed": true, "fstab": "/etc/fstab", "fstype": "ext4", "item": {"device": "/dev/sdg", "mount": "/hadoop/data04"}, "name": "/hadoop/data04", "src": "/dev/sdg1"}
ok: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdh', u'mount': u'/hadoop/data05'}) => {"changed": false, "fstab": "/etc/fstab", "fstype": "ext4", "item": {"device": "/dev/sdh", "mount": "/hadoop/data05"}, "name": "/hadoop/data05", "src": "/dev/sdh1"}
changed: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdi', u'mount': u'/hadoop/data06'}) => {"changed": true, "fstab": "/etc/fstab", "fstype": "ext4", "item": {"device": "/dev/sdi", "mount": "/hadoop/data06"}, "name": "/hadoop/data06", "src": "/dev/sdi1"}
changed: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdj', u'mount': u'/hadoop/data07'}) => {"changed": true, "fstab": "/etc/fstab", "fstype": "ext4", "item": {"device": "/dev/sdj", "mount": "/hadoop/data07"}, "name": "/hadoop/data07", "src": "/dev/sdj1"}
ok: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdk', u'mount': u'/hadoop/data08'}) => {"changed": false, "fstab": "/etc/fstab", "fstype": "ext4", "item": {"device": "/dev/sdk", "mount": "/hadoop/data08"}, "name": "/hadoop/data08", "src": "/dev/sdk1"}
changed: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdl', u'mount': u'/hadoop/data09'}) => {"changed": true, "fstab": "/etc/fstab", "fstype": "ext4", "item": {"device": "/dev/sdl", "mount": "/hadoop/data09"}, "name": "/hadoop/data09", "src": "/dev/sdl1"}
changed: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdm', u'mount': u'/hadoop/data10'}) => {"changed": true, "fstab": "/etc/fstab", "fstype": "ext4", "item": {"device": "/dev/sdm", "mount": "/hadoop/data10"}, "name": "/hadoop/data10", "src": "/dev/sdm1"}

TASK [zap GPT and MBR data structure of HDD.] **********************************
[DEPRECATION WARNING]: Using bare variables is deprecated. Update your playbooks so that the environment value uses the full variable syntax ('{{hdd_devices_and_mountpoints}}'). This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
skipping: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdc', u'mount': u'/hadoop/tmp'}) => {"changed": false, "item": {"device": "/dev/sdc", "mount": "/hadoop/tmp"}, "msg": "remote module does not support check mode", "skipped": true}
skipping: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdd', u'mount': u'/hadoop/data01'}) => {"changed": false, "item": {"device": "/dev/sdd", "mount": "/hadoop/data01"}, "msg": "remote module does not support check mode", "skipped": true}
skipping: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sde', u'mount': u'/hadoop/data02'}) => {"changed": false, "item": {"device": "/dev/sde", "mount": "/hadoop/data02"}, "msg": "remote module does not support check mode", "skipped": true}
skipping: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdf', u'mount': u'/hadoop/data03'}) => {"changed": false, "item": {"device": "/dev/sdf", "mount": "/hadoop/data03"}, "msg": "remote module does not support check mode", "skipped": true}
skipping: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdg', u'mount': u'/hadoop/data04'}) => {"changed": false, "item": {"device": "/dev/sdg", "mount": "/hadoop/data04"}, "msg": "remote module does not support check mode", "skipped": true}
skipping: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdh', u'mount': u'/hadoop/data05'}) => {"changed": false, "item": {"device": "/dev/sdh", "mount": "/hadoop/data05"}, "msg": "remote module does not support check mode", "skipped": true}
skipping: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdi', u'mount': u'/hadoop/data06'}) => {"changed": false, "item": {"device": "/dev/sdi", "mount": "/hadoop/data06"}, "msg": "remote module does not support check mode", "skipped": true}
skipping: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdj', u'mount': u'/hadoop/data07'}) => {"changed": false, "item": {"device": "/dev/sdj", "mount": "/hadoop/data07"}, "msg": "remote module does not support check mode", "skipped": true}
skipping: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdk', u'mount': u'/hadoop/data08'}) => {"changed": false, "item": {"device": "/dev/sdk", "mount": "/hadoop/data08"}, "msg": "remote module does not support check mode", "skipped": true}
skipping: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdl', u'mount': u'/hadoop/data09'}) => {"changed": false, "item": {"device": "/dev/sdl", "mount": "/hadoop/data09"}, "msg": "remote module does not support check mode", "skipped": true}
skipping: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdm', u'mount': u'/hadoop/data10'}) => {"changed": false, "item": {"device": "/dev/sdm", "mount": "/hadoop/data10"}, "msg": "remote module does not support check mode", "skipped": true}

TASK [create partition.] *******************************************************
[DEPRECATION WARNING]: Using bare variables is deprecated. Update your playbooks so that the environment value uses the full variable syntax ('{{hdd_devices_and_mountpoints}}'). This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
skipping: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdc', u'mount': u'/hadoop/tmp'}) => {"changed": false, "item": {"device": "/dev/sdc", "mount": "/hadoop/tmp"}, "msg": "remote module does not support check mode", "skipped": true}
skipping: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdd', u'mount': u'/hadoop/data01'}) => {"changed": false, "item": {"device": "/dev/sdd", "mount": "/hadoop/data01"}, "msg": "remote module does not support check mode", "skipped": true}
skipping: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sde', u'mount': u'/hadoop/data02'}) => {"changed": false, "item": {"device": "/dev/sde", "mount": "/hadoop/data02"}, "msg": "remote module does not support check mode", "skipped": true}
skipping: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdf', u'mount': u'/hadoop/data03'}) => {"changed": false, "item": {"device": "/dev/sdf", "mount": "/hadoop/data03"}, "msg": "remote module does not support check mode", "skipped": true}
skipping: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdg', u'mount': u'/hadoop/data04'}) => {"changed": false, "item": {"device": "/dev/sdg", "mount": "/hadoop/data04"}, "msg": "remote module does not support check mode", "skipped": true}
skipping: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdh', u'mount': u'/hadoop/data05'}) => {"changed": false, "item": {"device": "/dev/sdh", "mount": "/hadoop/data05"}, "msg": "remote module does not support check mode", "skipped": true}
skipping: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdi', u'mount': u'/hadoop/data06'}) => {"changed": false, "item": {"device": "/dev/sdi", "mount": "/hadoop/data06"}, "msg": "remote module does not support check mode", "skipped": true}
skipping: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdj', u'mount': u'/hadoop/data07'}) => {"changed": false, "item": {"device": "/dev/sdj", "mount": "/hadoop/data07"}, "msg": "remote module does not support check mode", "skipped": true}
skipping: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdk', u'mount': u'/hadoop/data08'}) => {"changed": false, "item": {"device": "/dev/sdk", "mount": "/hadoop/data08"}, "msg": "remote module does not support check mode", "skipped": true}
skipping: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdl', u'mount': u'/hadoop/data09'}) => {"changed": false, "item": {"device": "/dev/sdl", "mount": "/hadoop/data09"}, "msg": "remote module does not support check mode", "skipped": true}
skipping: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdm', u'mount': u'/hadoop/data10'}) => {"changed": false, "item": {"device": "/dev/sdm", "mount": "/hadoop/data10"}, "msg": "remote module does not support check mode", "skipped": true}

PLAY RECAP *********************************************************************
XXX.XXX.XXX.233            : ok=3    changed=1    unreachable=0    failed=0
!ansible-playbook -i {prereq_path}/current/hosts {prereq_path}/current/clean-hdds.yml
[DEPRECATION WARNING]: Instead of sudo/sudo_user, use become/become_user and make sure become_method is 'sudo' (default). This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.

PLAY [all] *********************************************************************

TASK [setup] *******************************************************************
ok: [XXX.XXX.XXX.233]

TASK [install gdisk package.] **************************************************
ok: [XXX.XXX.XXX.233]

TASK [unmount disks to be formatted.] ******************************************
skipping: [XXX.XXX.XXX.233]

TASK [zap GPT and MBR data structure of HDD.] **********************************
skipping: [XXX.XXX.XXX.233]

TASK [create partition.] *******************************************************
skipping: [XXX.XXX.XXX.233]

TASK [unmount disks to be formatted.] ******************************************
[DEPRECATION WARNING]: Using bare variables is deprecated. Update your playbooks so that the environment value uses the full variable syntax ('{{hdd_devices_and_mountpoints}}'). This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
changed: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdc', u'mount': u'/hadoop/tmp'})
changed: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdd', u'mount': u'/hadoop/data01'})
changed: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sde', u'mount': u'/hadoop/data02'})
changed: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdf', u'mount': u'/hadoop/data03'})
changed: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdg', u'mount': u'/hadoop/data04'})
ok: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdh', u'mount': u'/hadoop/data05'})
changed: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdi', u'mount': u'/hadoop/data06'})
changed: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdj', u'mount': u'/hadoop/data07'})
ok: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdk', u'mount': u'/hadoop/data08'})
changed: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdl', u'mount': u'/hadoop/data09'})
changed: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdm', u'mount': u'/hadoop/data10'})

TASK [zap GPT and MBR data structure of HDD.] **********************************
[DEPRECATION WARNING]: Using bare variables is deprecated. Update your playbooks so that the environment value uses the full variable syntax ('{{hdd_devices_and_mountpoints}}'). This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
changed: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdc', u'mount': u'/hadoop/tmp'})
changed: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdd', u'mount': u'/hadoop/data01'})
changed: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sde', u'mount': u'/hadoop/data02'})
changed: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdf', u'mount': u'/hadoop/data03'})
changed: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdg', u'mount': u'/hadoop/data04'})
changed: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdh', u'mount': u'/hadoop/data05'})
changed: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdi', u'mount': u'/hadoop/data06'})
changed: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdj', u'mount': u'/hadoop/data07'})
changed: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdk', u'mount': u'/hadoop/data08'})
changed: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdl', u'mount': u'/hadoop/data09'})
changed: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdm', u'mount': u'/hadoop/data10'})

TASK [create partition.] *******************************************************
[DEPRECATION WARNING]: Using bare variables is deprecated. Update your playbooks so that the environment value uses the full variable syntax ('{{hdd_devices_and_mountpoints}}'). This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
changed: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdc', u'mount': u'/hadoop/tmp'})
changed: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdd', u'mount': u'/hadoop/data01'})
changed: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sde', u'mount': u'/hadoop/data02'})
changed: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdf', u'mount': u'/hadoop/data03'})
changed: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdg', u'mount': u'/hadoop/data04'})
changed: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdh', u'mount': u'/hadoop/data05'})
changed: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdi', u'mount': u'/hadoop/data06'})
changed: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdj', u'mount': u'/hadoop/data07'})
changed: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdk', u'mount': u'/hadoop/data08'})
changed: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdl', u'mount': u'/hadoop/data09'})
changed: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdm', u'mount': u'/hadoop/data10'})

PLAY RECAP *********************************************************************
XXX.XXX.XXX.233            : ok=5    changed=3    unreachable=0    failed=0
!cp -fr {prereq_path}/original/roles {prereq_path}/current/roles
%%writefile {prereq_path}/current/mount.yml
- hosts: all
  become: yes
  roles:
    - mount
Writing /tmp/tmpwDTh_U/current/mount.yml
!ansible-playbook -CDv -i {prereq_path}/current/hosts {prereq_path}/current/mount.yml
Using /etc/ansible/ansible.cfg as config file

PLAY [all] *********************************************************************

TASK [setup] *******************************************************************
ok: [XXX.XXX.XXX.233]

TASK [mount : mkfs for /var/log] ***********************************************
skipping: [XXX.XXX.XXX.233] => {"changed": false, "skip_reason": "Conditional check failed", "skipped": true}

TASK [mount : create a temporary directory for previous /var/log] **************
skipping: [XXX.XXX.XXX.233] => {"changed": false, "skip_reason": "Conditional check failed", "skipped": true}

TASK [mount : copy files in /var/log to /tmp/old-log to keep logs] *************
skipping: [XXX.XXX.XXX.233] => {"changed": false, "skip_reason": "Conditional check failed", "skipped": true}

TASK [mount : mount /var/log] **************************************************
skipping: [XXX.XXX.XXX.233] => {"changed": false, "skip_reason": "Conditional check failed", "skipped": true}

TASK [mount : make filesystem on devices] **************************************
[DEPRECATION WARNING]: Using bare variables is deprecated. Update your playbooks so that the environment value uses the full variable syntax ('{{hdd_devices_and_mountpoints}}'). This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
ok: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdc', u'mount': u'/hadoop/tmp'}) => {"changed": false, "item": {"device": "/dev/sdc", "mount": "/hadoop/tmp"}}
ok: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdd', u'mount': u'/hadoop/data01'}) => {"changed": false, "item": {"device": "/dev/sdd", "mount": "/hadoop/data01"}}
ok: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sde', u'mount': u'/hadoop/data02'}) => {"changed": false, "item": {"device": "/dev/sde", "mount": "/hadoop/data02"}}
ok: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdf', u'mount': u'/hadoop/data03'}) => {"changed": false, "item": {"device": "/dev/sdf", "mount": "/hadoop/data03"}}
ok: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdg', u'mount': u'/hadoop/data04'}) => {"changed": false, "item": {"device": "/dev/sdg", "mount": "/hadoop/data04"}}
changed: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdh', u'mount': u'/hadoop/data05'}) => {"changed": true, "item": {"device": "/dev/sdh", "mount": "/hadoop/data05"}}
ok: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdi', u'mount': u'/hadoop/data06'}) => {"changed": false, "item": {"device": "/dev/sdi", "mount": "/hadoop/data06"}}
ok: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdj', u'mount': u'/hadoop/data07'}) => {"changed": false, "item": {"device": "/dev/sdj", "mount": "/hadoop/data07"}}
ok: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdk', u'mount': u'/hadoop/data08'}) => {"changed": false, "item": {"device": "/dev/sdk", "mount": "/hadoop/data08"}}
ok: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdl', u'mount': u'/hadoop/data09'}) => {"changed": false, "item": {"device": "/dev/sdl", "mount": "/hadoop/data09"}}
ok: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdm', u'mount': u'/hadoop/data10'}) => {"changed": false, "item": {"device": "/dev/sdm", "mount": "/hadoop/data10"}}

TASK [mount : ensure /hadoop/dataX directory for mount point is present on SN.]
[DEPRECATION WARNING]: Using bare variables is deprecated. Update your playbooks so that the environment value uses the full variable syntax ('{{hdd_devices_and_mountpoints}}'). This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
ok: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdc', u'mount': u'/hadoop/tmp'}) => {"changed": false, "gid": 0, "group": "root", "item": {"device": "/dev/sdc", "mount": "/hadoop/tmp"}, "mode": "0755", "owner": "root", "path": "/hadoop/tmp", "size": 4096, "state": "directory", "uid": 0}
ok: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdd', u'mount': u'/hadoop/data01'}) => {"changed": false, "gid": 0, "group": "root", "item": {"device": "/dev/sdd", "mount": "/hadoop/data01"}, "mode": "0755", "owner": "root", "path": "/hadoop/data01", "size": 4096, "state": "directory", "uid": 0}
ok: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sde', u'mount': u'/hadoop/data02'}) => {"changed": false, "gid": 0, "group": "root", "item": {"device": "/dev/sde", "mount": "/hadoop/data02"}, "mode": "0755", "owner": "root", "path": "/hadoop/data02", "size": 4096, "state": "directory", "uid": 0}
ok: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdf', u'mount': u'/hadoop/data03'}) => {"changed": false, "gid": 0, "group": "root", "item": {"device": "/dev/sdf", "mount": "/hadoop/data03"}, "mode": "0755", "owner": "root", "path": "/hadoop/data03", "size": 4096, "state": "directory", "uid": 0}
ok: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdg', u'mount': u'/hadoop/data04'}) => {"changed": false, "gid": 0, "group": "root", "item": {"device": "/dev/sdg", "mount": "/hadoop/data04"}, "mode": "0755", "owner": "root", "path": "/hadoop/data04", "size": 4096, "state": "directory", "uid": 0}
ok: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdh', u'mount': u'/hadoop/data05'}) => {"changed": false, "gid": 0, "group": "root", "item": {"device": "/dev/sdh", "mount": "/hadoop/data05"}, "mode": "0755", "owner": "root", "path": "/hadoop/data05", "size": 4096, "state": "directory", "uid": 0}
ok: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdi', u'mount': u'/hadoop/data06'}) => {"changed": false, "gid": 0, "group": "root", "item": {"device": "/dev/sdi", "mount": "/hadoop/data06"}, "mode": "0755", "owner": "root", "path": "/hadoop/data06", "size": 4096, "state": "directory", "uid": 0}
ok: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdj', u'mount': u'/hadoop/data07'}) => {"changed": false, "gid": 0, "group": "root", "item": {"device": "/dev/sdj", "mount": "/hadoop/data07"}, "mode": "0755", "owner": "root", "path": "/hadoop/data07", "size": 4096, "state": "directory", "uid": 0}
ok: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdk', u'mount': u'/hadoop/data08'}) => {"changed": false, "gid": 0, "group": "root", "item": {"device": "/dev/sdk", "mount": "/hadoop/data08"}, "mode": "0755", "owner": "root", "path": "/hadoop/data08", "size": 4096, "state": "directory", "uid": 0}
ok: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdl', u'mount': u'/hadoop/data09'}) => {"changed": false, "gid": 0, "group": "root", "item": {"device": "/dev/sdl", "mount": "/hadoop/data09"}, "mode": "0755", "owner": "root", "path": "/hadoop/data09", "size": 4096, "state": "directory", "uid": 0}
ok: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdm', u'mount': u'/hadoop/data10'}) => {"changed": false, "gid": 0, "group": "root", "item": {"device": "/dev/sdm", "mount": "/hadoop/data10"}, "mode": "0755", "owner": "root", "path": "/hadoop/data10", "size": 4096, "state": "directory", "uid": 0}

TASK [mount : add mount information in fstab on SN.] ***************************
[DEPRECATION WARNING]: Using bare variables is deprecated. Update your playbooks so that the environment value uses the full variable syntax ('{{hdd_devices_and_mountpoints}}'). This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
changed: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdc', u'mount': u'/hadoop/tmp'}) => {"changed": true, "fstab": "/etc/fstab", "fstype": "ext4", "item": {"device": "/dev/sdc", "mount": "/hadoop/tmp"}, "name": "/hadoop/tmp", "src": "/dev/sdc1"}
changed: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdd', u'mount': u'/hadoop/data01'}) => {"changed": true, "fstab": "/etc/fstab", "fstype": "ext4", "item": {"device": "/dev/sdd", "mount": "/hadoop/data01"}, "name": "/hadoop/data01", "src": "/dev/sdd1"}
changed: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sde', u'mount': u'/hadoop/data02'}) => {"changed": true, "fstab": "/etc/fstab", "fstype": "ext4", "item": {"device": "/dev/sde", "mount": "/hadoop/data02"}, "name": "/hadoop/data02", "src": "/dev/sde1"}
changed: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdf', u'mount': u'/hadoop/data03'}) => {"changed": true, "fstab": "/etc/fstab", "fstype": "ext4", "item": {"device": "/dev/sdf", "mount": "/hadoop/data03"}, "name": "/hadoop/data03", "src": "/dev/sdf1"}
changed: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdg', u'mount': u'/hadoop/data04'}) => {"changed": true, "fstab": "/etc/fstab", "fstype": "ext4", "item": {"device": "/dev/sdg", "mount": "/hadoop/data04"}, "name": "/hadoop/data04", "src": "/dev/sdg1"}
changed: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdh', u'mount': u'/hadoop/data05'}) => {"changed": true, "fstab": "/etc/fstab", "fstype": "ext4", "item": {"device": "/dev/sdh", "mount": "/hadoop/data05"}, "name": "/hadoop/data05", "src": "/dev/sdh1"}
changed: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdi', u'mount': u'/hadoop/data06'}) => {"changed": true, "fstab": "/etc/fstab", "fstype": "ext4", "item": {"device": "/dev/sdi", "mount": "/hadoop/data06"}, "name": "/hadoop/data06", "src": "/dev/sdi1"}
changed: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdj', u'mount': u'/hadoop/data07'}) => {"changed": true, "fstab": "/etc/fstab", "fstype": "ext4", "item": {"device": "/dev/sdj", "mount": "/hadoop/data07"}, "name": "/hadoop/data07", "src": "/dev/sdj1"}
changed: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdk', u'mount': u'/hadoop/data08'}) => {"changed": true, "fstab": "/etc/fstab", "fstype": "ext4", "item": {"device": "/dev/sdk", "mount": "/hadoop/data08"}, "name": "/hadoop/data08", "src": "/dev/sdk1"}
changed: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdl', u'mount': u'/hadoop/data09'}) => {"changed": true, "fstab": "/etc/fstab", "fstype": "ext4", "item": {"device": "/dev/sdl", "mount": "/hadoop/data09"}, "name": "/hadoop/data09", "src": "/dev/sdl1"}
changed: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdm', u'mount': u'/hadoop/data10'}) => {"changed": true, "fstab": "/etc/fstab", "fstype": "ext4", "item": {"device": "/dev/sdm", "mount": "/hadoop/data10"}, "name": "/hadoop/data10", "src": "/dev/sdm1"}

TASK [mount : ensure /hadoop/dataX directory for mount point is present on CN.]
skipping: [XXX.XXX.XXX.233] => (item=/hadoop/data) => {"changed": false, "item": "/hadoop/data", "skip_reason": "Conditional check failed", "skipped": true}

PLAY RECAP *********************************************************************
XXX.XXX.XXX.233 : ok=4 changed=2 unreachable=0 failed=0
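Check mode reports that only /dev/sdh (/hadoop/data05) needs a new filesystem and that the fstab entries for all the data volumes will be added. Apply the playbook for real: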
!ansible-playbook -i {prereq_path}/current/hosts {prereq_path}/current/mount.yml
PLAY [all] *********************************************************************

TASK [setup] *******************************************************************
ok: [XXX.XXX.XXX.233]

TASK [mount : mkfs for /var/log] ***********************************************
skipping: [XXX.XXX.XXX.233]

TASK [mount : create a temporary directory for previous /var/log] **************
skipping: [XXX.XXX.XXX.233]

TASK [mount : copy files in /var/log to /tmp/old-log to keep logs] *************
skipping: [XXX.XXX.XXX.233]

TASK [mount : mount /var/log] **************************************************
skipping: [XXX.XXX.XXX.233]

TASK [mount : make filesystem on devices] **************************************
[DEPRECATION WARNING]: Using bare variables is deprecated. Update your playbooks so that the environment value uses the full variable syntax ('{{hdd_devices_and_mountpoints}}'). This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
ok: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdc', u'mount': u'/hadoop/tmp'})
ok: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdd', u'mount': u'/hadoop/data01'})
ok: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sde', u'mount': u'/hadoop/data02'})
ok: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdf', u'mount': u'/hadoop/data03'})
ok: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdg', u'mount': u'/hadoop/data04'})
changed: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdh', u'mount': u'/hadoop/data05'})
ok: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdi', u'mount': u'/hadoop/data06'})
ok: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdj', u'mount': u'/hadoop/data07'})
ok: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdk', u'mount': u'/hadoop/data08'})
ok: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdl', u'mount': u'/hadoop/data09'})
ok: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdm', u'mount': u'/hadoop/data10'})

TASK [mount : ensure /hadoop/dataX directory for mount point is present on SN.]
[DEPRECATION WARNING]: Using bare variables is deprecated. Update your playbooks so that the environment value uses the full variable syntax ('{{hdd_devices_and_mountpoints}}'). This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
ok: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdc', u'mount': u'/hadoop/tmp'})
ok: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdd', u'mount': u'/hadoop/data01'})
ok: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sde', u'mount': u'/hadoop/data02'})
ok: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdf', u'mount': u'/hadoop/data03'})
ok: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdg', u'mount': u'/hadoop/data04'})
ok: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdh', u'mount': u'/hadoop/data05'})
ok: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdi', u'mount': u'/hadoop/data06'})
ok: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdj', u'mount': u'/hadoop/data07'})
ok: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdk', u'mount': u'/hadoop/data08'})
ok: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdl', u'mount': u'/hadoop/data09'})
ok: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdm', u'mount': u'/hadoop/data10'})

TASK [mount : add mount information in fstab on SN.] ***************************
[DEPRECATION WARNING]: Using bare variables is deprecated. Update your playbooks so that the environment value uses the full variable syntax ('{{hdd_devices_and_mountpoints}}'). This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
changed: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdc', u'mount': u'/hadoop/tmp'})
changed: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdd', u'mount': u'/hadoop/data01'})
changed: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sde', u'mount': u'/hadoop/data02'})
changed: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdf', u'mount': u'/hadoop/data03'})
changed: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdg', u'mount': u'/hadoop/data04'})
changed: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdh', u'mount': u'/hadoop/data05'})
changed: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdi', u'mount': u'/hadoop/data06'})
changed: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdj', u'mount': u'/hadoop/data07'})
changed: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdk', u'mount': u'/hadoop/data08'})
changed: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdl', u'mount': u'/hadoop/data09'})
changed: [XXX.XXX.XXX.233] => (item={u'device': u'/dev/sdm', u'mount': u'/hadoop/data10'})

TASK [mount : ensure /hadoop/dataX directory for mount point is present on CN.]
skipping: [XXX.XXX.XXX.233] => (item=/hadoop/data)

PLAY RECAP *********************************************************************
XXX.XXX.XXX.233 : ok=4 changed=2 unreachable=0 failed=0
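For the record, the device-to-mountpoint layout the play iterates over is easy to reconstruct. This is only a sketch; the authoritative hdd_devices_and_mountpoints list lives in the inventory's group variables:
# Sketch of the layout shown in the play output above: /dev/sdc holds
# /hadoop/tmp, and /dev/sdd../dev/sdm hold /hadoop/data01../hadoop/data10.
hdd_devices_and_mountpoints = (
    [{'device': '/dev/sdc', 'mount': '/hadoop/tmp'}] +
    [{'device': '/dev/sd%s' % c, 'mount': '/hadoop/data%02d' % i}
     for i, c in enumerate('defghijklm', 1)])
hdd_devices_and_mountpoints[:3]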
Review the mount points...
!ansible -a 'df -H' -i {prereq_path}/current/hosts all
XXX.XXX.XXX.233 | SUCCESS | rc=0 >>
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 14G 4.0G 9.3G 30% /
tmpfs 34G 0 34G 0% /dev/shm
/dev/sda5 726M 180M 509M 27% /mnt
/dev/sdb1 3.0T 7.1G 2.8T 1% /var/log
/dev/sdc1 3.0T 7.9G 2.8T 1% /hadoop/tmp
/dev/sdd1 3.0T 77M 2.9T 1% /hadoop/data01
/dev/sde1 3.0T 77M 2.9T 1% /hadoop/data02
/dev/sdf1 3.0T 22G 2.8T 1% /hadoop/data03
/dev/sdg1 3.0T 77M 2.9T 1% /hadoop/data04
/dev/sdh1 4.0T 72M 3.8T 1% /hadoop/data05
/dev/sdi1 3.0T 77M 2.9T 1% /hadoop/data06
/dev/sdj1 3.0T 77M 2.9T 1% /hadoop/data07
/dev/sdk1 3.0T 77M 2.9T 1% /hadoop/data08
/dev/sdl1 3.0T 77M 2.9T 1% /hadoop/data09
/dev/sdm1 3.0T 77M 2.9T 1% /hadoop/data10
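All twelve volumes are mounted. The same check can be done mechanically; this sketch captures the ansible output with IPython's ! assignment and assumes the hdd_devices_and_mountpoints sketch above:
# Verify that every expected mount point appears in df on the target host.
df_lines = !ansible -a 'df -H' -i {prereq_path}/current/hosts all
mounted = set(l.split()[-1] for l in df_lines if l.startswith('/dev/'))
missing = set(m['mount'] for m in hdd_devices_and_mountpoints) - mounted
assert not missing, missing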
Check the health of the whole HDFS...
!ansible hadoop_client -s -U hdfs -a 'hdfs dfsadmin -report' -l {target_group}
XXX.XXX.XXX.200 | SUCCESS | rc=0 >>
Configured Capacity: 238194899124224 (216.64 TB)
Present Capacity: 226085043405016 (205.62 TB)
DFS Remaining: 222783090728298 (202.62 TB)
DFS Used: 3301952676718 (3.00 TB)
DFS Used%: 1.46%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 746
-------------------------------------------------
Live datanodes (8):
Name: XXX.XXX.XXX.226:1004 (sn02022001)
Hostname: sn02022001
Decommission Status : Normal
Configured Capacity: 29528238325760 (26.86 TB)
DFS Used: 518379108592 (482.78 GB)
Non DFS Used: 1501183849997 (1.37 TB)
DFS Remaining: 27508675367171 (25.02 TB)
DFS Used%: 1.76%
DFS Remaining%: 93.16%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 8
Last contact: Fri Sep 02 19:19:31 JST 2016
Name: XXX.XXX.XXX.232:1004 (sn02032001)
Hostname: sn02032001
Decommission Status : Normal
Configured Capacity: 29528238325760 (26.86 TB)
DFS Used: 342257353961 (318.75 GB)
Non DFS Used: 1501322791679 (1.37 TB)
DFS Remaining: 27684658180120 (25.18 TB)
DFS Used%: 1.16%
DFS Remaining%: 93.76%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 13
Last contact: Fri Sep 02 19:19:31 JST 2016
Name: XXX.XXX.XXX.234:1004 (sn02031201)
Hostname: sn02031201
Decommission Status : Normal
Configured Capacity: 29528238325760 (26.86 TB)
DFS Used: 545166445539 (507.73 GB)
Non DFS Used: 1501182690661 (1.37 TB)
DFS Remaining: 27481889189560 (24.99 TB)
DFS Used%: 1.85%
DFS Remaining%: 93.07%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 8
Last contact: Fri Sep 02 19:19:30 JST 2016
Name: XXX.XXX.XXX.228:1004 (sn02021201)
Hostname: sn02021201
Decommission Status : Normal
Configured Capacity: 29528238325760 (26.86 TB)
DFS Used: 511456102937 (476.33 GB)
Non DFS Used: 1501047593282 (1.37 TB)
DFS Remaining: 27515734629541 (25.03 TB)
DFS Used%: 1.73%
DFS Remaining%: 93.18%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 6
Last contact: Fri Sep 02 19:19:30 JST 2016
Name: XXX.XXX.XXX.231:1004 (sn02032401)
Hostname: sn02032401
Decommission Status : Normal
Configured Capacity: 31497230843904 (28.65 TB)
DFS Used: 157318606848 (146.51 GB)
Non DFS Used: 1601186369313 (1.46 TB)
DFS Remaining: 29738725867743 (27.05 TB)
DFS Used%: 0.50%
DFS Remaining%: 94.42%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 8
Last contact: Fri Sep 02 19:19:32 JST 2016
Name: XXX.XXX.XXX.236:1004 (sn02030401)
Hostname: sn02030401
Decommission Status : Normal
Configured Capacity: 29528238325760 (26.86 TB)
DFS Used: 146703651081 (136.63 GB)
Non DFS Used: 1501439077195 (1.37 TB)
DFS Remaining: 27880095597484 (25.36 TB)
DFS Used%: 0.50%
DFS Remaining%: 94.42%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 13
Last contact: Fri Sep 02 19:19:30 JST 2016
Name: XXX.XXX.XXX.225:1004 (sn02022401)
Hostname: sn02022401
Decommission Status : Normal
Configured Capacity: 29528238325760 (26.86 TB)
DFS Used: 532531852056 (495.96 GB)
Non DFS Used: 1501046517760 (1.37 TB)
DFS Remaining: 27494659955944 (25.01 TB)
DFS Used%: 1.80%
DFS Remaining%: 93.11%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 6
Last contact: Fri Sep 02 19:19:30 JST 2016
Name: XXX.XXX.XXX.230:1004 (sn02020401)
Hostname: sn02020401
Decommission Status : Normal
Configured Capacity: 29528238325760 (26.86 TB)
DFS Used: 548139555704 (510.49 GB)
Non DFS Used: 1501446829321 (1.37 TB)
DFS Remaining: 27478651940735 (24.99 TB)
DFS Used%: 1.86%
DFS Remaining%: 93.06%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 14
Last contact: Fri Sep 02 19:19:31 JST 2016
Dead datanodes (1):
Name: XXX.XXX.XXX.233:1004 (sn02031601)
Hostname: sn02031601
Decommission Status : Normal
Configured Capacity: 0 (0 B)
DFS Used: 0 (0 B)
Non DFS Used: 0 (0 B)
DFS Remaining: 0 (0 B)
DFS Used%: 100.00%
DFS Remaining%: 0.00%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 0
Last contact: Thu Aug 25 04:05:38 JST 2016
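The only dead DataNode is the target sn02031601 itself, and the 746 missing blocks with replication factor 1 presumably have their single replicas on its disks; both should recover once the node rejoins. To pull the dead-node list out programmatically rather than by eye, a sketch based on the report format above:
# Extract the hostnames listed under "Dead datanodes" in the report.
report = !ansible hadoop_client -s -U hdfs -a 'hdfs dfsadmin -report' -l {target_group}
dead_section = '\n'.join(report).split('Dead datanodes')[-1]
print([l.split(':')[1].strip() for l in dead_section.splitlines() if l.startswith('Hostname:')])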
Let's prepare the DataNode directories.
Ensure that the /hadoop/dataXX/dfs/datadir directories are present...
import os
import tempfile
work_dir = tempfile.mkdtemp()
work_dir
'/tmp/tmpP0U4b9'
!rm -fr {work_dir}/hadoop
!git clone https://github.com/NII-cloud-operation/Literate-computing-Hadoop.git {work_dir}/hadoop
!tree {work_dir}/hadoop
Cloning into '/tmp/tmpP0U4b9/hadoop'... remote: Counting objects: 849, done. remote: Total 849 (delta 0), reused 0 (delta 0), pack-reused 849 Receiving objects: 100% (849/849), 169.06 KiB | 267.00 KiB/s, done. Resolving deltas: 100% (272/272), done. Checking connectivity... done. /tmp/tmpP0U4b9/hadoop └── playbooks ├── conf_base.retry ├── conf_base.yml ├── conf_hdfs_base.yml ├── conf_hdfs_spark.yml ├── conf_hdfs_tez.yml ├── conf_hdfs_yarn.yml ├── conf_namenode_bootstrapstandby.yml ├── conf_tez.yml ├── enter_hdfs_safemode.yml ├── format_namenode.yml ├── group_vars │ └── all │ ├── base │ ├── cgroups │ ├── collect │ ├── f500.dumpall │ ├── hbase_master │ ├── hbase_regionserver │ ├── hcatalog │ ├── hdfs_base │ ├── hdfs_spark │ ├── hdfs_tez │ ├── hdfs_yarn │ ├── hive │ ├── httpfs │ ├── hue │ ├── java7 │ ├── java8 │ ├── journalnode │ ├── mapreduce_history │ ├── namenode │ ├── namenode_bootstrapstandby │ ├── namenode_format │ ├── os │ ├── pig │ ├── presto_client │ ├── presto_coordinator │ ├── presto_user │ ├── presto_worker │ ├── resourcemanager │ ├── site-defaults │ ├── slavenode │ ├── spark │ ├── spark_history │ ├── spark_user │ ├── storm │ ├── tez │ └── zookeeper_server ├── install-base.yml ├── install_client.yml ├── install_hbase_master.yml ├── install_hbase_regionserver.yml ├── install_hcatalog.yml ├── install_hive.yml ├── install_httpfs.yml ├── install_hue.yml ├── install_journalnode.yml ├── install_mapreduce_history.yml ├── install_namenode.yml ├── install_pig.yml ├── install_resourcemanager.yml ├── install_slavenode.yml ├── install_spark_historyserver.yml ├── install_spark.yml ├── install_timelineservice.yml ├── install_zookeeper.yml ├── roles │ ├── base │ │ ├── defaults │ │ │ └── main.yml │ │ ├── meta │ │ │ └── main.yml │ │ ├── tasks │ │ │ ├── conf.yml │ │ │ ├── kerberos.yml │ │ │ ├── keytab.yml │ │ │ ├── main.yml │ │ │ ├── principal.yml │ │ │ └── repo.yml │ │ └── templates │ │ ├── capacity-scheduler.xml.j2 │ │ ├── container-executor.cfg.j2 │ │ ├── core-site.xml.j2 │ │ ├── hadoop-env.sh.j2 │ │ ├── hadoop-metrics2.properties.j2 │ │ ├── hadoop-metrics.properties.j2 │ │ ├── hdfs-site.xml.j2 │ │ ├── hdp.repo.j2 │ │ ├── hosts.exclude.j2 │ │ ├── hosts.list.j2 │ │ ├── log4j.properties.j2 │ │ ├── mapred-env.sh.j2 │ │ ├── mapred-site.xml.j2 │ │ ├── merge-keytabs.ktutil.j2 │ │ ├── ssl-client.xml.j2 │ │ ├── ssl-server.xml.j2 │ │ ├── yarn-env.sh.j2 │ │ ├── yarn-site.xml.j2 │ │ └── zk-acl.txt.j2 │ ├── cgroups │ │ ├── defaults │ │ │ └── main.yml │ │ ├── meta │ │ │ └── main.yml │ │ ├── tasks │ │ │ ├── conf.yml │ │ │ ├── install.yml │ │ │ ├── main.yml │ │ │ └── resource.yml │ │ └── templates │ │ ├── cgconfig.conf.j2 │ │ └── cgroups.sh.j2 │ ├── client │ │ ├── defaults │ │ │ └── main.yml │ │ ├── meta │ │ │ └── main.yml │ │ └── tasks │ │ ├── install.yml │ │ └── main.yml │ ├── collect │ │ ├── defaults │ │ │ └── main.yml │ │ ├── handlers │ │ │ └── main.yml │ │ ├── meta │ │ │ └── main.yml │ │ ├── README.md │ │ ├── tasks │ │ │ └── main.yml │ │ └── vars │ │ └── main.yml │ ├── datanode_server_deletedata │ │ └── tasks │ │ ├── delete.yml │ │ └── main.yml │ ├── f500.dumpall │ │ ├── COPYING │ │ ├── COPYING.LESSER │ │ ├── defaults │ │ │ └── main.yml │ │ ├── meta │ │ │ └── main.yml │ │ ├── README.md │ │ ├── tasks │ │ │ └── main.yml │ │ └── templates │ │ └── dumpall.j2 │ ├── hbase_master │ │ ├── defaults │ │ │ └── main.yml │ │ ├── meta │ │ │ └── main.yml │ │ ├── tasks │ │ │ ├── config.yml │ │ │ ├── install.yml │ │ │ ├── main.yml │ │ │ └── principal.yml │ │ └── templates │ │ ├── hadoop-metrics2-hbase.properties.j2 │ │ 
├── hbase-env.sh.j2 │ │ ├── hbase-master.j2 │ │ ├── hbase-policy.xml.j2 │ │ ├── hbase-service-test.rb.j2 │ │ ├── hbase-site.xml.j2 │ │ ├── log4j.properties.j2 │ │ ├── regionservers.j2 │ │ └── zk-jaas.conf.j2 │ ├── hbase_regionserver │ │ ├── defaults │ │ │ └── main.yml │ │ ├── files │ │ │ └── graceful_stop.sh │ │ ├── meta │ │ │ └── main.yml │ │ ├── tasks │ │ │ ├── config.yml │ │ │ ├── install.yml │ │ │ ├── main.yml │ │ │ └── principal.yml │ │ └── templates │ │ ├── hadoop-metrics2-hbase.properties.j2 │ │ ├── hbase-env.sh.j2 │ │ ├── hbase-policy.xml.j2 │ │ ├── hbase-regionserver.j2 │ │ ├── hbase-site.xml.j2 │ │ ├── log4j.properties.j2 │ │ ├── regionservers.j2 │ │ └── zk-jaas.conf.j2 │ ├── hcatalog │ │ ├── defaults │ │ │ └── main.yml │ │ ├── meta │ │ │ └── main.yml │ │ ├── tasks │ │ │ ├── config.yml │ │ │ ├── install.yml │ │ │ └── main.yml │ │ └── templates │ │ └── hcat-env.sh.j2 │ ├── hdfs_base │ │ ├── defaults │ │ │ └── main.yml │ │ ├── meta │ │ │ └── main.yml │ │ └── tasks │ │ ├── config.yml │ │ └── main.yml │ ├── hdfs_spark │ │ ├── defaults │ │ │ └── main.yml │ │ ├── meta │ │ │ └── main.yml │ │ └── tasks │ │ ├── config.yml │ │ └── main.yml │ ├── hdfs_tez │ │ ├── defaults │ │ │ └── main.yml │ │ ├── meta │ │ │ └── main.yml │ │ └── tasks │ │ ├── config.yml │ │ └── main.yml │ ├── hdfs_yarn │ │ ├── defaults │ │ │ └── main.yml │ │ ├── meta │ │ │ └── main.yml │ │ └── tasks │ │ ├── config.yml │ │ └── main.yml │ ├── hive │ │ ├── defaults │ │ │ └── main.yml │ │ ├── meta │ │ │ └── main.yml │ │ ├── tasks │ │ │ ├── config.yml │ │ │ ├── install.yml │ │ │ ├── main.yml │ │ │ └── principal.yml │ │ └── templates │ │ ├── hive-exec-log4j.properties.j2 │ │ ├── hive-log4j.properties.j2 │ │ └── hive-site.xml.j2 │ ├── httpfs │ │ ├── defaults │ │ │ └── main.yml │ │ ├── meta │ │ │ └── main.yml │ │ ├── tasks │ │ │ ├── config.yml │ │ │ ├── install.yml │ │ │ └── main.yml │ │ └── templates │ │ ├── hadoop-httpfs-default.j2 │ │ ├── hadoop-httpfs.j2 │ │ ├── httpfs-env.sh.j2 │ │ ├── httpfs-log4j.properties.j2 │ │ ├── httpfs.sh.j2 │ │ ├── httpfs-signature.secret.j2 │ │ └── httpfs-site.xml.j2 │ ├── hue │ │ ├── defaults │ │ │ └── main.yml │ │ ├── meta │ │ │ └── main.yml │ │ ├── tasks │ │ │ ├── config.yml │ │ │ ├── install.yml │ │ │ └── main.yml │ │ └── templates │ │ ├── hue_httpd.conf.j2 │ │ ├── hue.ini.j2 │ │ └── log.conf.j2 │ ├── java7 │ │ ├── defaults │ │ │ └── main.yml │ │ ├── files │ │ │ ├── env_keep_javahome │ │ │ └── java.sh │ │ ├── meta │ │ │ └── main.yml │ │ └── tasks │ │ ├── config.yml │ │ ├── install.yml │ │ └── main.yml │ ├── java8 │ │ ├── defaults │ │ │ └── main.yml │ │ ├── meta │ │ │ └── main.yml │ │ └── tasks │ │ ├── install.yml │ │ └── main.yml │ ├── journalnode │ │ ├── defaults │ │ │ └── main.yml │ │ ├── meta │ │ │ └── main.yml │ │ ├── tasks │ │ │ ├── config.yml │ │ │ ├── install.yml │ │ │ └── main.yml │ │ └── templates │ │ └── default_hadoop-hdfs-journalnode.j2 │ ├── journalnode_server_createdir │ │ └── tasks │ │ ├── conf.yml │ │ └── main.yml │ ├── journalnode_server_deletedata │ │ └── tasks │ │ ├── delete.yml │ │ └── main.yml │ ├── mapreduce_history │ │ ├── defaults │ │ │ └── main.yml │ │ ├── meta │ │ │ └── main.yml │ │ ├── tasks │ │ │ ├── config.yml │ │ │ ├── install.yml │ │ │ └── main.yml │ │ └── templates │ │ └── default_hadoop-mapreduce-historyserver.j2 │ ├── namenode │ │ ├── defaults │ │ │ └── main.yml │ │ ├── meta │ │ │ └── main.yml │ │ ├── tasks │ │ │ ├── config.yml │ │ │ ├── install.yml │ │ │ └── main.yml │ │ └── templates │ │ ├── default_hadoop-hdfs-namenode.j2 │ │ ├── default_hadoop-hdfs-zkfc.j2 │ 
│ ├── hdfs-balancer.sh.j2 │ │ └── jaas-hdfs.conf.j2 │ ├── namenode_bootstrapstandby │ │ ├── defaults │ │ │ └── main.yml │ │ ├── meta │ │ │ └── main.yml │ │ └── tasks │ │ ├── config.yml │ │ └── main.yml │ ├── namenode_format │ │ ├── defaults │ │ │ └── main.yml │ │ ├── meta │ │ │ └── main.yml │ │ └── tasks │ │ ├── config.yml │ │ └── main.yml │ ├── os │ │ ├── defaults │ │ │ └── main.yml │ │ ├── meta │ │ │ └── main.yml │ │ └── tasks │ │ ├── kernel.yml │ │ ├── limits.yml │ │ ├── main.yml │ │ └── thp.yml │ ├── pig │ │ ├── defaults │ │ │ └── main.yml │ │ ├── meta │ │ │ └── main.yml │ │ ├── tasks │ │ │ ├── config.yml │ │ │ ├── install.yml │ │ │ └── main.yml │ │ └── templates │ │ ├── log4j.properties.j2 │ │ └── pig.properties.j2 │ ├── presto_client │ │ ├── defaults │ │ │ └── main.yml │ │ ├── meta │ │ │ └── main.yml │ │ ├── tasks │ │ │ ├── install.yml │ │ │ └── main.yml │ │ └── templates │ │ ├── config.properties.j2 │ │ ├── hive.properties.j2 │ │ ├── jvm.config.j2 │ │ ├── launcher.j2 │ │ ├── log.properties.j2 │ │ └── node.properties.j2 │ ├── presto_coordinator │ │ ├── defaults │ │ │ └── main.yml │ │ ├── files │ │ │ └── env_keep_prestohome │ │ ├── meta │ │ │ └── main.yml │ │ ├── tasks │ │ │ ├── catalog.yml │ │ │ ├── config.yml │ │ │ ├── install.yml │ │ │ └── main.yml │ │ └── templates │ │ ├── config.properties.j2 │ │ ├── hive.properties.j2 │ │ ├── jvm.config.j2 │ │ ├── launcher.j2 │ │ ├── log.properties.j2 │ │ ├── node.properties.j2 │ │ └── presto.sh.j2 │ ├── prestogres │ ├── presto_user │ │ ├── defaults │ │ │ └── main.yml │ │ ├── meta │ │ │ └── main.yml │ │ └── tasks │ │ ├── main.yml │ │ └── user.yml │ ├── presto_worker │ │ ├── defaults │ │ │ └── main.yml │ │ ├── files │ │ │ └── env_keep_prestohome │ │ ├── meta │ │ │ └── main.yml │ │ ├── tasks │ │ │ ├── catalog.yml │ │ │ ├── config.yml │ │ │ ├── install.yml │ │ │ └── main.yml │ │ └── templates │ │ ├── config.properties.j2 │ │ ├── hive.properties.j2 │ │ ├── jvm.config.j2 │ │ ├── launcher.j2 │ │ ├── log.properties.j2 │ │ ├── node.properties.j2 │ │ └── presto.sh.j2 │ ├── resourcemanager │ │ ├── defaults │ │ │ └── main.yml │ │ ├── meta │ │ │ └── main.yml │ │ ├── tasks │ │ │ ├── config.yml │ │ │ ├── install.yml │ │ │ └── main.yml │ │ └── templates │ │ └── default_hadoop-yarn-resourcemanager.j2 │ ├── site-defaults │ │ └── defaults │ │ └── main.yml │ ├── slavenode │ │ ├── defaults │ │ │ └── main.yml │ │ ├── meta │ │ │ └── main.yml │ │ ├── tasks │ │ │ ├── config.yml │ │ │ ├── install.yml │ │ │ └── main.yml │ │ └── templates │ │ ├── default_hadoop-hdfs-datanode.j2 │ │ └── default_hadoop-yarn-nodemanager.j2 │ ├── spark │ │ ├── defaults │ │ │ └── main.yml │ │ ├── files │ │ │ └── env_keep_sparkhome │ │ ├── meta │ │ │ └── main.yml │ │ ├── tasks │ │ │ ├── config.yml │ │ │ ├── install-tarball.yml │ │ │ ├── install.yml │ │ │ ├── main.yml │ │ │ └── principal.yml │ │ └── templates │ │ ├── fairscheduler.xml.j2 │ │ ├── log4j.properties.j2 │ │ ├── metrics.properties.j2 │ │ ├── spark-defaults.conf.j2 │ │ ├── spark-env.sh.j2 │ │ └── spark.sh.j2 │ ├── spark_history │ │ ├── defaults │ │ │ └── main.yml │ │ ├── meta │ │ │ └── main.yml │ │ └── tasks │ │ ├── config.yml │ │ └── main.yml │ ├── spark_user │ │ ├── defaults │ │ │ └── main.yml │ │ ├── meta │ │ │ └── main.yml │ │ └── tasks │ │ ├── main.yml │ │ └── user.yml │ ├── storm │ │ ├── defaults │ │ │ └── main.yml │ │ ├── files │ │ │ ├── storm-drpc │ │ │ ├── storm-nimbus │ │ │ ├── storm.py │ │ │ ├── storm-supervisor │ │ │ └── storm-ui │ │ ├── meta │ │ │ └── main.yml │ │ ├── tasks │ │ │ ├── config.yml │ │ │ ├── install.yml │ │ │ 
├── main.yml │ │ │ └── user.yml │ │ └── templates │ │ ├── storm_env.ini.j2 │ │ ├── storm-env.sh.j2 │ │ ├── storm-slider-env.sh.j2 │ │ └── storm.yaml.j2 │ ├── tez │ │ ├── defaults │ │ │ └── main.yml │ │ ├── meta │ │ │ └── main.yml │ │ ├── tasks │ │ │ ├── config.yml │ │ │ ├── install.yml │ │ │ └── main.yml │ │ └── templates │ │ └── tez-site.xml.j2 │ ├── timelineservice │ │ ├── defaults │ │ │ └── main.yml │ │ ├── meta │ │ │ └── main.yml │ │ ├── tasks │ │ │ ├── config.yml │ │ │ ├── install.yml │ │ │ └── main.yml │ │ └── templates │ │ └── default_hadoop-yarn-timelineserver.j2 │ ├── zookeeper_server │ │ ├── defaults │ │ │ └── main.yml │ │ ├── meta │ │ │ └── main.yml │ │ ├── tasks │ │ │ ├── config.yml │ │ │ ├── install.yml │ │ │ ├── main.yml │ │ │ └── principal.yml │ │ └── templates │ │ ├── jaas.conf.j2 │ │ ├── log4j.properties.j2 │ │ ├── myid.j2 │ │ ├── zoo.cfg.j2 │ │ ├── zookeeper-env.sh.j2 │ │ └── zookeeper-server.j2 │ └── zookeeper_server_deletedata │ └── tasks │ ├── delete.yml │ └── main.yml ├── start_datanode.yml ├── start_hbase_master.yml ├── start_hbase_regionserver.yml ├── start_hcatalog.yml ├── start_httpfs.yml ├── start_hue.yml ├── start_journalnode.yml ├── start_mapreduce_historyserver.yml ├── start_namenode.retry ├── start_namenode.yml ├── start_nodemanager.yml ├── start_resourcemanager.yml ├── start_spark_historyserver.yml ├── start_timelineservice.yml ├── start_zookeeper-server.yml ├── stop_datanode.yml ├── stop_hbase_master.yml ├── stop_hbase_regionserver.yml ├── stop_hcatalog.yml ├── stop_journalnode.yml ├── stop_mapreduce_historyserver.yml ├── stop_namenode.yml ├── stop_nodemanager.yml ├── stop_resourcemanager.yml ├── stop_spark_historyserver.yml ├── stop_timelineservice.yml ├── stop_zookeeper-server.yml ├── sync_kdc.yml └── upgrade_namenode.yml 194 directories, 404 files
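For the operation record, it may be worth noting which revision of the playbooks was cloned:
!git -C {work_dir}/hadoop log -1 --oneline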
playbook_dir = os.path.join(work_dir, 'hadoop/playbooks')
!ls -la {playbook_dir} | head
total 244
drwxr-xr-x 4 root root  4096 Sep  2 19:20 .
drwxr-xr-x 4 root root  4096 Sep  2 19:20 ..
-rw-r--r-- 1 root root    13 Sep  2 19:20 conf_base.retry
-rw-r--r-- 1 root root    39 Sep  2 19:20 conf_base.yml
-rw-r--r-- 1 root root   136 Sep  2 19:20 conf_hdfs_base.yml
-rw-r--r-- 1 root root   137 Sep  2 19:20 conf_hdfs_spark.yml
-rw-r--r-- 1 root root   135 Sep  2 19:20 conf_hdfs_tez.yml
-rw-r--r-- 1 root root   136 Sep  2 19:20 conf_hdfs_yarn.yml
-rw-r--r-- 1 root root   188 Sep  2 19:20 conf_namenode_bootstrapstandby.yml
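The playbooks are in place. Dry-run the slavenode installation against the new machine first: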
!ansible-playbook -CDv {playbook_dir}/install_slavenode.yml -l {host_new_machine}
Using /etc/ansible/ansible.cfg as config file

PLAY [hadoop_slavenode] ********************************************************

TASK [setup] *******************************************************************
ok: [XXX.XXX.XXX.233]

TASK [base : include] **********************************************************
included: /tmp/tmpP0U4b9/hadoop/playbooks/roles/base/tasks/repo.yml for XXX.XXX.XXX.233

TASK [base : install_hdp_repo] *************************************************
ok: [XXX.XXX.XXX.233] => {"changed": false, "gid": 0, "group": "root", "mode": "0644", "owner": "root", "path": "/etc/yum.repos.d/hdp.repo", "size": 556, "state": "file", "uid": 0}

TASK [base : include] **********************************************************
included: /tmp/tmpP0U4b9/hadoop/playbooks/roles/base/tasks/conf.yml for XXX.XXX.XXX.233

TASK [base : create_hadoop_conf_dir] *******************************************
ok: [XXX.XXX.XXX.233] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/etc/hadoop/conf", "size": 4096, "state": "directory", "uid": 0}

TASK [base : copy_conf_files] **************************************************
ok: [XXX.XXX.XXX.233] => (item=core-site.xml) => {"changed": false, "gid": 0, "group": "root", "item": "core-site.xml", "mode": "0644", "owner": "root", "path": "/etc/hadoop/conf/core-site.xml", "size": 2319, "state": "file", "uid": 0}
ok: [XXX.XXX.XXX.233] => (item=hdfs-site.xml) => {"changed": false, "gid": 0, "group": "root", "item": "hdfs-site.xml", "mode": "0644", "owner": "root", "path": "/etc/hadoop/conf/hdfs-site.xml", "size": 5989, "state": "file", "uid": 0}
ok: [XXX.XXX.XXX.233] => (item=yarn-site.xml) => {"changed": false, "gid": 0, "group": "root", "item": "yarn-site.xml", "mode": "0644", "owner": "root", "path": "/etc/hadoop/conf/yarn-site.xml", "size": 6653, "state": "file", "uid": 0}
ok: [XXX.XXX.XXX.233] => (item=mapred-site.xml) => {"changed": false, "gid": 0, "group": "root", "item": "mapred-site.xml", "mode": "0644", "owner": "root", "path": "/etc/hadoop/conf/mapred-site.xml", "size": 2287, "state": "file", "uid": 0}
ok: [XXX.XXX.XXX.233] => (item=hadoop-env.sh) => {"changed": false, "gid": 0, "group": "root", "item": "hadoop-env.sh", "mode": "0644", "owner": "root", "path": "/etc/hadoop/conf/hadoop-env.sh", "size": 4623, "state": "file", "uid": 0}
ok: [XXX.XXX.XXX.233] => (item=yarn-env.sh) => {"changed": false, "gid": 0, "group": "root", "item": "yarn-env.sh", "mode": "0644", "owner": "root", "path": "/etc/hadoop/conf/yarn-env.sh", "size": 4567, "state": "file", "uid": 0}
ok: [XXX.XXX.XXX.233] => (item=mapred-env.sh) => {"changed": false, "gid": 0, "group": "root", "item": "mapred-env.sh", "mode": "0644", "owner": "root", "path": "/etc/hadoop/conf/mapred-env.sh", "size": 1639, "state": "file", "uid": 0}
ok: [XXX.XXX.XXX.233] => (item=hadoop-metrics.properties) => {"changed": false, "gid": 0, "group": "root", "item": "hadoop-metrics.properties", "mode": "0644", "owner": "root", "path": "/etc/hadoop/conf/hadoop-metrics.properties", "size": 2490, "state": "file", "uid": 0}
ok: [XXX.XXX.XXX.233] => (item=hadoop-metrics2.properties) => {"changed": false, "gid": 0, "group": "root", "item": "hadoop-metrics2.properties", "mode": "0644", "owner": "root", "path": "/etc/hadoop/conf/hadoop-metrics2.properties", "size": 2425, "state": "file", "uid": 0}
ok: [XXX.XXX.XXX.233] => (item=log4j.properties) => {"changed": false, "gid": 0, "group": "root", "item": "log4j.properties", "mode": "0644", "owner": "root", "path": "/etc/hadoop/conf/log4j.properties", "size": 11291, "state": "file", "uid": 0}
ok: [XXX.XXX.XXX.233] => (item=capacity-scheduler.xml) => {"changed": false, "gid": 0, "group": "root", "item": "capacity-scheduler.xml", "mode": "0644", "owner": "root", "path": "/etc/hadoop/conf/capacity-scheduler.xml", "size": 4436, "state": "file", "uid": 0}
ok: [XXX.XXX.XXX.233] => (item=hosts.exclude) => {"changed": false, "gid": 0, "group": "root", "item": "hosts.exclude", "mode": "0644", "owner": "root", "path": "/etc/hadoop/conf/hosts.exclude", "size": 2, "state": "file", "uid": 0}
ok: [XXX.XXX.XXX.233] => (item=hosts.list) => {"changed": false, "gid": 0, "group": "root", "item": "hosts.list", "mode": "0644", "owner": "root", "path": "/etc/hadoop/conf/hosts.list", "size": 99, "state": "file", "uid": 0}

TASK [base : copy_secure_conf_files] *******************************************
ok: [XXX.XXX.XXX.233] => (item=ssl-server.xml) => {"changed": false, "gid": 0, "group": "root", "item": "ssl-server.xml", "mode": "0644", "owner": "root", "path": "/etc/hadoop/conf/ssl-server.xml", "size": 672, "state": "file", "uid": 0}
ok: [XXX.XXX.XXX.233] => (item=ssl-client.xml) => {"changed": false, "gid": 0, "group": "root", "item": "ssl-client.xml", "mode": "0644", "owner": "root", "path": "/etc/hadoop/conf/ssl-client.xml", "size": 344, "state": "file", "uid": 0}
ok: [XXX.XXX.XXX.233] => (item=zk-acl.txt) => {"changed": false, "gid": 0, "group": "root", "item": "zk-acl.txt", "mode": "0644", "owner": "root", "path": "/etc/hadoop/conf/zk-acl.txt", "size": 15, "state": "file", "uid": 0}
ok: [XXX.XXX.XXX.233] => (item=container-executor.cfg) => {"changed": false, "gid": 0, "group": "root", "item": "container-executor.cfg", "mode": "0644", "owner": "root", "path": "/etc/hadoop/conf/container-executor.cfg", "size": 225, "state": "file", "uid": 0}

TASK [base : include] **********************************************************
included: /tmp/tmpP0U4b9/hadoop/playbooks/roles/base/tasks/kerberos.yml for XXX.XXX.XXX.233

TASK [base : include] **********************************************************
included: /tmp/tmpP0U4b9/hadoop/playbooks/roles/base/tasks/principal.yml for XXX.XXX.XXX.233

TASK [base : Check principal] **************************************************
skipping: [XXX.XXX.XXX.233] => {"changed": false, "msg": "remote module does not support check mode", "skipped": true}

TASK [base : Add principal] ****************************************************
skipping: [XXX.XXX.XXX.233] => {"changed": false, "skip_reason": "Conditional check failed", "skipped": true}

TASK [base : Check keytab] *****************************************************
skipping: [XXX.XXX.XXX.233] => {"changed": false, "msg": "remote module does not support check mode", "skipped": true}

TASK [base : Prepare keytab] ***************************************************
skipping: [XXX.XXX.XXX.233] => {"changed": false, "skip_reason": "Conditional check failed", "skipped": true}

TASK [base : Modify permissions of keytab] *************************************
ok: [XXX.XXX.XXX.233] => {"changed": false, "gid": 494, "group": "hadoop", "mode": "0400", "owner": "hdfs", "path": "/etc/hadoop/conf/hdfs-unmerged.keytab", "size": 394, "state": "file", "uid": 492}

TASK [base : include] **********************************************************
included: /tmp/tmpP0U4b9/hadoop/playbooks/roles/base/tasks/principal.yml for XXX.XXX.XXX.233

TASK [base : Check principal] **************************************************
skipping: [XXX.XXX.XXX.233] => {"changed": false, "msg": "remote module does not support check mode", "skipped": true}

TASK [base : Add principal] ****************************************************
skipping: [XXX.XXX.XXX.233] => {"changed": false, "skip_reason": "Conditional check failed", "skipped": true}

TASK [base : Check keytab] *****************************************************
skipping: [XXX.XXX.XXX.233] => {"changed": false, "msg": "remote module does not support check mode", "skipped": true}

TASK [base : Prepare keytab] ***************************************************
skipping: [XXX.XXX.XXX.233] => {"changed": false, "skip_reason": "Conditional check failed", "skipped": true}

TASK [base : Modify permissions of keytab] *************************************
ok: [XXX.XXX.XXX.233] => {"changed": false, "gid": 494, "group": "hadoop", "mode": "0400", "owner": "mapred", "path": "/etc/hadoop/conf/mapred-unmerged.keytab", "size": 404, "state": "file", "uid": 493}

TASK [base : include] **********************************************************
included: /tmp/tmpP0U4b9/hadoop/playbooks/roles/base/tasks/principal.yml for XXX.XXX.XXX.233

TASK [base : Check principal] **************************************************
skipping: [XXX.XXX.XXX.233] => {"changed": false, "msg": "remote module does not support check mode", "skipped": true}

TASK [base : Add principal] ****************************************************
skipping: [XXX.XXX.XXX.233] => {"changed": false, "skip_reason": "Conditional check failed", "skipped": true}

TASK [base : Check keytab] *****************************************************
skipping: [XXX.XXX.XXX.233] => {"changed": false, "msg": "remote module does not support check mode", "skipped": true}

TASK [base : Prepare keytab] ***************************************************
skipping: [XXX.XXX.XXX.233] => {"changed": false, "skip_reason": "Conditional check failed", "skipped": true}

TASK [base : Modify permissions of keytab] *************************************
ok: [XXX.XXX.XXX.233] => {"changed": false, "gid": 494, "group": "hadoop", "mode": "0400", "owner": "yarn", "path": "/etc/hadoop/conf/yarn-unmerged.keytab", "size": 394, "state": "file", "uid": 494}

TASK [base : include] **********************************************************
included: /tmp/tmpP0U4b9/hadoop/playbooks/roles/base/tasks/principal.yml for XXX.XXX.XXX.233

TASK [base : Check principal] **************************************************
skipping: [XXX.XXX.XXX.233] => {"changed": false, "msg": "remote module does not support check mode", "skipped": true}

TASK [base : Add principal] ****************************************************
skipping: [XXX.XXX.XXX.233] => {"changed": false, "skip_reason": "Conditional check failed", "skipped": true}

TASK [base : Check keytab] *****************************************************
skipping: [XXX.XXX.XXX.233] => {"changed": false, "msg": "remote module does not support check mode", "skipped": true}

TASK [base : Prepare keytab] ***************************************************
skipping: [XXX.XXX.XXX.233] => {"changed": false, "skip_reason": "Conditional check failed", "skipped": true}

TASK [base : Modify permissions of keytab] *************************************
ok: [XXX.XXX.XXX.233] => {"changed": false, "gid": 494, "group": "hadoop", "mode": "0400", "owner": "hdfs", "path": "/etc/hadoop/conf/http.keytab", "size": 394, "state": "file", "uid": 492}

TASK [base : include] **********************************************************
included: /tmp/tmpP0U4b9/hadoop/playbooks/roles/base/tasks/keytab.yml for XXX.XXX.XXX.233

TASK [base : check keytab] *****************************************************
ok: [XXX.XXX.XXX.233] => {"changed": false, "stat": {"atime": 1470633429.3369722, "checksum": "48b45130e5b2f4dfd1c6b307a593708e637f4252", "ctime": 1453095828.7535946, "dev": 2050, "exists": true, "gid": 494, "gr_name": "hadoop", "inode": 148503, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "md5": "15bc4a0eb0e7e967dda8e346ab30a307", "mode": "0400", "mtime": 1453095827.5435684, "nlink": 1, "path": "/etc/hadoop/conf/hdfs.keytab", "pw_name": "hdfs", "rgrp": false, "roth": false, "rusr": true, "size": 786, "uid": 492, "wgrp": false, "woth": false, "wusr": false, "xgrp": false, "xoth": false, "xusr": false}}

TASK [base : prepare_script] ***************************************************
skipping: [XXX.XXX.XXX.233] => {"changed": false, "skip_reason": "Conditional check failed", "skipped": true}

TASK [base : run_script] *******************************************************
skipping: [XXX.XXX.XXX.233] => {"changed": false, "skip_reason": "Conditional check failed", "skipped": true}

TASK [base : Modify permissions of keytab] *************************************
ok: [XXX.XXX.XXX.233] => {"changed": false, "gid": 494, "group": "hadoop", "mode": "0400", "owner": "hdfs", "path": "/etc/hadoop/conf/hdfs.keytab", "size": 786, "state": "file", "uid": 492}

TASK [base : include] **********************************************************
included: /tmp/tmpP0U4b9/hadoop/playbooks/roles/base/tasks/keytab.yml for XXX.XXX.XXX.233

TASK [base : check keytab] *****************************************************
ok: [XXX.XXX.XXX.233] => {"changed": false, "stat": {"atime": 1470633436.1891208, "checksum": "d58ef68c5f9eb660eb1b325fce8a7102b132498f", "ctime": 1453095834.5307198, "dev": 2050, "exists": true, "gid": 494, "gr_name": "hadoop", "inode": 148504, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "md5": "0698e0628a38db48b404c7952337946e", "mode": "0400", "mtime": 1453095833.2656922, "nlink": 1, "path": "/etc/hadoop/conf/mapred.keytab", "pw_name": "mapred", "rgrp": false, "roth": false, "rusr": true, "size": 796, "uid": 493, "wgrp": false, "woth": false, "wusr": false, "xgrp": false, "xoth": false, "xusr": false}}

TASK [base : prepare_script] ***************************************************
skipping: [XXX.XXX.XXX.233] => {"changed": false, "skip_reason": "Conditional check failed", "skipped": true}

TASK [base : run_script] *******************************************************
skipping: [XXX.XXX.XXX.233] => {"changed": false, "skip_reason": "Conditional check failed", "skipped": true}

TASK [base : Modify permissions of keytab] *************************************
ok: [XXX.XXX.XXX.233] => {"changed": false, "gid": 494, "group": "hadoop", "mode": "0400", "owner": "mapred", "path": "/etc/hadoop/conf/mapred.keytab", "size": 796, "state": "file", "uid": 493}

TASK [base : include] **********************************************************
included: /tmp/tmpP0U4b9/hadoop/playbooks/roles/base/tasks/keytab.yml for XXX.XXX.XXX.233

TASK [base : check keytab] *****************************************************
ok: [XXX.XXX.XXX.233] => {"changed": false, "stat": {"atime": 1470633443.2042727, "checksum": "cea51b497249c0ed850ce31660b077517e418ad5", "ctime": 1453095840.1158407, "dev": 2050, "exists": true, "gid": 494, "gr_name": "hadoop", "inode": 148505, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "md5": "6edf62a839dcafb95966b9498ef69f79", "mode": "0400", "mtime": 1453095838.8728137, "nlink": 1, "path": "/etc/hadoop/conf/yarn.keytab", "pw_name": "yarn", "rgrp": false, "roth": false, "rusr": true, "size": 786, "uid": 494, "wgrp": false, "woth": false, "wusr": false, "xgrp": false, "xoth": false, "xusr": false}}

TASK [base : prepare_script] ***************************************************
skipping: [XXX.XXX.XXX.233] => {"changed": false, "skip_reason": "Conditional check failed", "skipped": true}

TASK [base : run_script] *******************************************************
skipping: [XXX.XXX.XXX.233] => {"changed": false, "skip_reason": "Conditional check failed", "skipped": true}

TASK [base : Modify permissions of keytab] *************************************
ok: [XXX.XXX.XXX.233] => {"changed": false, "gid": 494, "group": "hadoop", "mode": "0400", "owner": "yarn", "path": "/etc/hadoop/conf/yarn.keytab", "size": 786, "state": "file", "uid": 494}

TASK [java7 : include] *********************************************************
included: /tmp/tmpP0U4b9/hadoop/playbooks/roles/java7/tasks/install.yml for XXX.XXX.XXX.233

TASK [java7 : check_jdk7_installed] ********************************************
ok: [XXX.XXX.XXX.233] => {"changed": false, "cmd": "rpm -qa|grep jdk-1.7.0_75-fcs.x86_64", "delta": "0:00:02.603238", "end": "2016-09-02 19:20:45.598545", "failed": false, "failed_when_result": false, "rc": 0, "start": "2016-09-02 19:20:42.995307", "stderr": "", "stdout": "jdk-1.7.0_75-fcs.x86_64", "stdout_lines": ["jdk-1.7.0_75-fcs.x86_64"], "warnings": ["Consider using yum, dnf or zypper module rather than running rpm"]}
[WARNING]: Consider using yum, dnf or zypper module rather than running rpm

TASK [java7 : download_oraclejdk7_by_wget] *************************************
skipping: [XXX.XXX.XXX.233] => {"changed": false, "skip_reason": "Conditional check failed", "skipped": true}

TASK [java7 : md5sum_rpm] ******************************************************
skipping: [XXX.XXX.XXX.233] => {"changed": false, "skip_reason": "Conditional check failed", "skipped": true}

TASK [java7 : check_md5sum] ****************************************************
skipping: [XXX.XXX.XXX.233] => {"changed": false, "skip_reason": "Conditional check failed", "skipped": true}

TASK [java7 : install_oraclejdk] ***********************************************
skipping: [XXX.XXX.XXX.233] => {"changed": false, "skip_reason": "Conditional check failed", "skipped": true}

TASK [java7 : include] *********************************************************
included: /tmp/tmpP0U4b9/hadoop/playbooks/roles/java7/tasks/config.yml for XXX.XXX.XXX.233

TASK [java7 : copy_bash_profile] ***********************************************
ok: [XXX.XXX.XXX.233] => {"changed": false, "checksum": "b60f0995dba4c2f287340a26aad343ebc97335ea", "dest": "/etc/profile.d/java.sh", "gid": 0, "group": "root", "mode": "0644", "owner": "root", "path": "/etc/profile.d/java.sh", "size": 72, "state": "file", "uid": 0}

TASK [java7 : copy_sudoers_conf_of_JAVA_HOME] **********************************
ok: [XXX.XXX.XXX.233] => {"changed": false, "checksum": "e8858ea3e690a93d46a8b3f38e4d2da4f7cc8c2a", "dest": "/etc/sudoers.d/env_keep_javahome", "gid": 0, "group": "root", "mode": "0440", "owner": "root", "path": "/etc/sudoers.d/env_keep_javahome", "size": 35, "state": "file", "uid": 0}

TASK [slavenode : include] *****************************************************
included: /tmp/tmpP0U4b9/hadoop/playbooks/roles/slavenode/tasks/install.yml for XXX.XXX.XXX.233

TASK [slavenode : install_slavenode_packages] **********************************
ok: [XXX.XXX.XXX.233] => (item=[u'hadoop-hdfs-datanode', u'hadoop-yarn-nodemanager', u'hadoop-mapreduce']) => {"changed": false, "item": ["hadoop-hdfs-datanode", "hadoop-yarn-nodemanager", "hadoop-mapreduce"], "msg": "", "rc": 0, "results": ["hadoop-hdfs-datanode-XXX.XXX.XXX.2.3.4.0-3485.el6.noarch providing hadoop-hdfs-datanode is already installed", "hadoop-yarn-nodemanager-XXX.XXX.XXX.2.3.4.0-3485.el6.noarch providing hadoop-yarn-nodemanager is already installed", "hadoop-mapreduce-XXX.XXX.XXX.2.3.4.0-3485.el6.noarch providing hadoop-mapreduce is already installed"]}

TASK [slavenode : include] *****************************************************
included: /tmp/tmpP0U4b9/hadoop/playbooks/roles/slavenode/tasks/config.yml for XXX.XXX.XXX.233

TASK [slavenode : create_datanode_data_dir] ************************************
[DEPRECATION WARNING]: Using bare variables is deprecated. Update your playbooks so that the environment value uses the full variable syntax ('{{dfs_datanode_data_dirs}}'). This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
changed: [XXX.XXX.XXX.233] => (item=/hadoop/data01/dfs/datadir) => {"changed": true, "item": "/hadoop/data01/dfs/datadir"}
--- before
+++ after
@@ -1,4 +1,4 @@
 {
     "path": "/hadoop/data01/dfs/datadir",
-    "state": "absent"
+    "state": "directory"
 }
changed: [XXX.XXX.XXX.233] => (item=/hadoop/data02/dfs/datadir) => {"changed": true, "item": "/hadoop/data02/dfs/datadir"}
--- before
+++ after
@@ -1,4 +1,4 @@
 {
     "path": "/hadoop/data02/dfs/datadir",
-    "state": "absent"
+    "state": "directory"
 }
ok: [XXX.XXX.XXX.233] => (item=/hadoop/data03/dfs/datadir) => {"changed": false, "gid": 494, "group": "hadoop", "item": "/hadoop/data03/dfs/datadir", "mode": "0700", "owner": "hdfs", "path": "/hadoop/data03/dfs/datadir", "size": 4096, "state": "directory", "uid": 492}
changed: [XXX.XXX.XXX.233] => (item=/hadoop/data04/dfs/datadir) => {"changed": true, "item": "/hadoop/data04/dfs/datadir"}
--- before
+++ after
@@ -1,4 +1,4 @@
 {
     "path": "/hadoop/data04/dfs/datadir",
-    "state": "absent"
+    "state": "directory"
 }
changed: [XXX.XXX.XXX.233] => (item=/hadoop/data05/dfs/datadir) => {"changed": true, "item": "/hadoop/data05/dfs/datadir"}
--- before
+++ after
@@ -1,4 +1,4 @@
 {
     "path": "/hadoop/data05/dfs/datadir",
-    "state": "absent"
+    "state": "directory"
 }
changed: [XXX.XXX.XXX.233] => (item=/hadoop/data06/dfs/datadir) => {"changed": true, "item": "/hadoop/data06/dfs/datadir"}
--- before
+++ after
@@ -1,4 +1,4 @@
 {
     "path": "/hadoop/data06/dfs/datadir",
-    "state": "absent"
+    "state": "directory"
 }
changed: [XXX.XXX.XXX.233] => (item=/hadoop/data07/dfs/datadir) => {"changed": true, "item": "/hadoop/data07/dfs/datadir"}
--- before
+++ after
@@ -1,4 +1,4 @@
 {
     "path": "/hadoop/data07/dfs/datadir",
-    "state": "absent"
+    "state": "directory"
 }
changed: [XXX.XXX.XXX.233] => (item=/hadoop/data08/dfs/datadir) => {"changed": true, "item": "/hadoop/data08/dfs/datadir"}
--- before
+++ after
@@ -1,4 +1,4 @@
 {
     "path": "/hadoop/data08/dfs/datadir",
-    "state": "absent"
+    "state": "directory"
 }
changed: [XXX.XXX.XXX.233] => (item=/hadoop/data09/dfs/datadir) => {"changed": true, "item": "/hadoop/data09/dfs/datadir"}
--- before
+++ after
@@ -1,4 +1,4 @@
 {
     "path": "/hadoop/data09/dfs/datadir",
-    "state": "absent"
+    "state": "directory"
 }
changed: [XXX.XXX.XXX.233] => (item=/hadoop/data10/dfs/datadir) => {"changed": true, "item": "/hadoop/data10/dfs/datadir"}
--- before
+++ after
@@ -1,4 +1,4 @@
 {
     "path": "/hadoop/data10/dfs/datadir",
-    "state": "absent"
+    "state": "directory"
 }

TASK [slavenode : fix_init_scripts] ********************************************
changed: [XXX.XXX.XXX.233] => (item={u'path': u'hadoop-hdfs', u'name': u'hadoop-hdfs-datanode'}) => {"backup": "", "changed": true, "item": {"name": "hadoop-hdfs-datanode", "path": "hadoop-hdfs"}, "msg": "line added"}
--- before: /usr/hdp/XXX.XXX.XXX.0-3485/hadoop-hdfs/etc/rc.d/init.d/hadoop-hdfs-datanode (content)
+++ after: /usr/hdp/XXX.XXX.XXX.0-3485/hadoop-hdfs/etc/rc.d/init.d/hadoop-hdfs-datanode (content)
@@ -32,6 +32,7 @@
 ### END INIT INFO

 . /lib/lsb/init-functions
+. /etc/default/hadoop-hdfs-datanode

 BIGTOP_DEFAULTS_DIR=${BIGTOP_DEFAULTS_DIR-/etc/default}
 [ -n "${BIGTOP_DEFAULTS_DIR}" -a -r ${BIGTOP_DEFAULTS_DIR}/hadoop ] && . ${BIGTOP_DEFAULTS_DIR}/hadoop
changed: [XXX.XXX.XXX.233] => (item={u'path': u'hadoop-yarn', u'name': u'hadoop-yarn-nodemanager'}) => {"backup": "", "changed": true, "item": {"name": "hadoop-yarn-nodemanager", "path": "hadoop-yarn"}, "msg": "line added"}
--- before: /usr/hdp/XXX.XXX.XXX.0-3485/hadoop-yarn/etc/rc.d/init.d/hadoop-yarn-nodemanager (content)
+++ after: /usr/hdp/XXX.XXX.XXX.0-3485/hadoop-yarn/etc/rc.d/init.d/hadoop-yarn-nodemanager (content)
@@ -32,6 +32,7 @@
 ### END INIT INFO

 . /lib/lsb/init-functions
+. /etc/default/hadoop-yarn-nodemanager

 BIGTOP_DEFAULTS_DIR=${BIGTOP_DEFAULTS_DIR-/etc/default}
 [ -n "${BIGTOP_DEFAULTS_DIR}" -a -r ${BIGTOP_DEFAULTS_DIR}/hadoop ] && . ${BIGTOP_DEFAULTS_DIR}/hadoop

TASK [slavenode : create_symbolic_link_to/etc/init.d] **************************
ok: [XXX.XXX.XXX.233] => (item={u'path': u'hadoop-hdfs', u'name': u'hadoop-hdfs-datanode'}) => {"changed": false, "dest": "/etc/init.d/hadoop-hdfs-datanode", "gid": 0, "group": "root", "item": {"name": "hadoop-hdfs-datanode", "path": "hadoop-hdfs"}, "mode": "0777", "owner": "root", "size": 70, "src": "/usr/hdp/XXX.XXX.XXX.0-3485/hadoop-hdfs/etc/rc.d/init.d/hadoop-hdfs-datanode", "state": "link", "uid": 0}
ok: [XXX.XXX.XXX.233] => (item={u'path': u'hadoop-yarn', u'name': u'hadoop-yarn-nodemanager'}) => {"changed": false, "dest": "/etc/init.d/hadoop-yarn-nodemanager", "gid": 0, "group": "root", "item": {"name": "hadoop-yarn-nodemanager", "path": "hadoop-yarn"}, "mode": "0777", "owner": "root", "size": 73, "src": "/usr/hdp/XXX.XXX.XXX.0-3485/hadoop-yarn/etc/rc.d/init.d/hadoop-yarn-nodemanager", "state": "link", "uid": 0}

TASK [slavenode : create_holder_directory_for_hadoop_tmp_dir] ******************
ok: [XXX.XXX.XXX.233] => {"changed": false, "gid": 493, "group": "yarn", "mode": "0755", "owner": "yarn", "path": "/hadoop/tmp", "size": 4096, "state": "directory", "uid": 494}

TASK [slavenode : create_yarn_pid_dir] *****************************************
ok: [XXX.XXX.XXX.233] => {"changed": false, "gid": 493, "group": "yarn", "mode": "0755", "owner": "yarn", "path": "/var/run/hadoop-yarn", "size": 4096, "state": "directory", "uid": 494}

TASK [slavenode : create_hdfs_log_dir] *****************************************
changed: [XXX.XXX.XXX.233] => {"changed": true, "gid": 491, "group": "hdfs", "mode": "0755", "owner": "hdfs", "path": "/var/log/hadoop-hdfs", "size": 4096, "state": "directory", "uid": 492}
--- before
+++ after
@@ -1,4 +1,4 @@
 {
-    "group": 491,
+    "group": 494,
     "path": "/var/log/hadoop-hdfs"
 }

TASK [slavenode : create_yarn_log_dir] *****************************************
changed: [XXX.XXX.XXX.233] => {"changed": true, "gid": 493, "group": "yarn", "mode": "0755", "owner": "yarn", "path": "/var/log/hadoop-yarn", "size": 4096, "state": "directory", "uid": 494}
--- before
+++ after
@@ -1,4 +1,4 @@
 {
-    "group": 493,
+    "group": 494,
     "path": "/var/log/hadoop-yarn"
 }

TASK [slavenode : copy_defaults_file] ******************************************
ok: [XXX.XXX.XXX.233] => (item=hadoop-hdfs-datanode) => {"changed": false, "gid": 0, "group": "root", "item": "hadoop-hdfs-datanode", "mode": "0755", "owner": "root", "path": "/etc/default/hadoop-hdfs-datanode", "size": 1246, "state": "file", "uid": 0}
ok: [XXX.XXX.XXX.233] => (item=hadoop-yarn-nodemanager) => {"changed": false, "gid": 0, "group": "root", "item": "hadoop-yarn-nodemanager", "mode": "0755", "owner": "root", "path": "/etc/default/hadoop-yarn-nodemanager", "size": 1000, "state": "file", "uid": 0}

TASK [slavenode : ensure_version_link] *****************************************
changed: [XXX.XXX.XXX.233] => {"changed": true, "dest": "/usr/hdp/current/hadoop-yarn", "src": "/usr/hdp/XXX.XXX.XXX.0-3485/hadoop-yarn", "state": "absent"}
--- before
+++ after
@@ -1,4 +1,4 @@
 {
     "path": "/usr/hdp/current/hadoop-yarn",
-    "state": "absent"
+    "state": "link"
 }

PLAY RECAP *********************************************************************
XXX.XXX.XXX.233 : ok=42 changed=5 unreachable=0 failed=0
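The dry run shows exactly what we want: the dfs/datadir directories will be recreated on the re-mounted data volumes, the init scripts are patched to source their /etc/default files, the log directory ownership is corrected, and the /usr/hdp/current/hadoop-yarn symlink is restored. Apply it: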
!ansible-playbook {playbook_dir}/install_slavenode.yml -l { host_new_machine }
PLAY [hadoop_slavenode] ********************************************************

TASK [setup] *******************************************************************
ok: [XXX.XXX.XXX.233]

TASK [base : include] **********************************************************
included: /tmp/tmpP0U4b9/hadoop/playbooks/roles/base/tasks/repo.yml for XXX.XXX.XXX.233

TASK [base : install_hdp_repo] *************************************************
ok: [XXX.XXX.XXX.233]

TASK [base : include] **********************************************************
included: /tmp/tmpP0U4b9/hadoop/playbooks/roles/base/tasks/conf.yml for XXX.XXX.XXX.233

TASK [base : create_hadoop_conf_dir] *******************************************
ok: [XXX.XXX.XXX.233]

TASK [base : copy_conf_files] **************************************************
ok: [XXX.XXX.XXX.233] => (item=core-site.xml)
ok: [XXX.XXX.XXX.233] => (item=hdfs-site.xml)
ok: [XXX.XXX.XXX.233] => (item=yarn-site.xml)
ok: [XXX.XXX.XXX.233] => (item=mapred-site.xml)
ok: [XXX.XXX.XXX.233] => (item=hadoop-env.sh)
ok: [XXX.XXX.XXX.233] => (item=yarn-env.sh)
ok: [XXX.XXX.XXX.233] => (item=mapred-env.sh)
ok: [XXX.XXX.XXX.233] => (item=hadoop-metrics.properties)
ok: [XXX.XXX.XXX.233] => (item=hadoop-metrics2.properties)
ok: [XXX.XXX.XXX.233] => (item=log4j.properties)
ok: [XXX.XXX.XXX.233] => (item=capacity-scheduler.xml)
ok: [XXX.XXX.XXX.233] => (item=hosts.exclude)
ok: [XXX.XXX.XXX.233] => (item=hosts.list)

TASK [base : copy_secure_conf_files] *******************************************
ok: [XXX.XXX.XXX.233] => (item=ssl-server.xml)
ok: [XXX.XXX.XXX.233] => (item=ssl-client.xml)
ok: [XXX.XXX.XXX.233] => (item=zk-acl.txt)
ok: [XXX.XXX.XXX.233] => (item=container-executor.cfg)

TASK [base : include] **********************************************************
included: /tmp/tmpP0U4b9/hadoop/playbooks/roles/base/tasks/kerberos.yml for XXX.XXX.XXX.233

TASK [base : include] **********************************************************
included: /tmp/tmpP0U4b9/hadoop/playbooks/roles/base/tasks/principal.yml for XXX.XXX.XXX.233

TASK [base : Check principal] **************************************************
ok: [XXX.XXX.XXX.233]

TASK [base : Add principal] ****************************************************
skipping: [XXX.XXX.XXX.233]

TASK [base : Check keytab] *****************************************************
ok: [XXX.XXX.XXX.233]

TASK [base : Prepare keytab] ***************************************************
skipping: [XXX.XXX.XXX.233]

TASK [base : Modify permissions of keytab] *************************************
ok: [XXX.XXX.XXX.233]

TASK [base : include] **********************************************************
included: /tmp/tmpP0U4b9/hadoop/playbooks/roles/base/tasks/principal.yml for XXX.XXX.XXX.233

TASK [base : Check principal] **************************************************
ok: [XXX.XXX.XXX.233]

TASK [base : Add principal] ****************************************************
skipping: [XXX.XXX.XXX.233]

TASK [base : Check keytab] *****************************************************
ok: [XXX.XXX.XXX.233]

TASK [base : Prepare keytab] ***************************************************
skipping: [XXX.XXX.XXX.233]

TASK [base : Modify permissions of keytab] *************************************
ok: [XXX.XXX.XXX.233]

TASK [base : include] **********************************************************
included: /tmp/tmpP0U4b9/hadoop/playbooks/roles/base/tasks/principal.yml for XXX.XXX.XXX.233

TASK [base : Check principal] **************************************************
ok: [XXX.XXX.XXX.233]

TASK [base : Add principal] ****************************************************
skipping: [XXX.XXX.XXX.233]

TASK [base : Check keytab] *****************************************************
ok: [XXX.XXX.XXX.233]

TASK [base : Prepare keytab] ***************************************************
skipping: [XXX.XXX.XXX.233]

TASK [base : Modify permissions of keytab] *************************************
ok: [XXX.XXX.XXX.233]

TASK [base : include] **********************************************************
included: /tmp/tmpP0U4b9/hadoop/playbooks/roles/base/tasks/principal.yml for XXX.XXX.XXX.233

TASK [base : Check principal] **************************************************
ok: [XXX.XXX.XXX.233]

TASK [base : Add principal] ****************************************************
skipping: [XXX.XXX.XXX.233]

TASK [base : Check keytab] *****************************************************
ok: [XXX.XXX.XXX.233]

TASK [base : Prepare keytab] ***************************************************
skipping: [XXX.XXX.XXX.233]

TASK [base : Modify permissions of keytab] *************************************
ok: [XXX.XXX.XXX.233]

TASK [base : include] **********************************************************
included: /tmp/tmpP0U4b9/hadoop/playbooks/roles/base/tasks/keytab.yml for XXX.XXX.XXX.233

TASK [base : check keytab] *****************************************************
ok: [XXX.XXX.XXX.233]

TASK [base : prepare_script] ***************************************************
skipping: [XXX.XXX.XXX.233]

TASK [base : run_script] *******************************************************
skipping: [XXX.XXX.XXX.233]

TASK [base : Modify permissions of keytab] *************************************
ok: [XXX.XXX.XXX.233]

TASK [base : include] **********************************************************
included: /tmp/tmpP0U4b9/hadoop/playbooks/roles/base/tasks/keytab.yml for XXX.XXX.XXX.233

TASK [base : check keytab] *****************************************************
ok: [XXX.XXX.XXX.233]

TASK [base : prepare_script] ***************************************************
skipping: [XXX.XXX.XXX.233]

TASK [base : run_script] *******************************************************
skipping: [XXX.XXX.XXX.233]

TASK [base : Modify permissions of keytab] *************************************
ok: [XXX.XXX.XXX.233]

TASK [base : include] **********************************************************
included: /tmp/tmpP0U4b9/hadoop/playbooks/roles/base/tasks/keytab.yml for XXX.XXX.XXX.233

TASK [base : check keytab] *****************************************************
ok: [XXX.XXX.XXX.233]

TASK [base : prepare_script] ***************************************************
skipping: [XXX.XXX.XXX.233]

TASK [base : run_script] *******************************************************
skipping: [XXX.XXX.XXX.233]

TASK [base : Modify permissions of keytab] *************************************
ok: [XXX.XXX.XXX.233]

TASK [java7 : include] *********************************************************
included: /tmp/tmpP0U4b9/hadoop/playbooks/roles/java7/tasks/install.yml for XXX.XXX.XXX.233

TASK [java7 : check_jdk7_installed] ********************************************
ok: [XXX.XXX.XXX.233]
 [WARNING]: Consider using yum, dnf or zypper module rather than running rpm

TASK [java7 : download_oraclejdk7_by_wget] *************************************
skipping: [XXX.XXX.XXX.233]

TASK [java7 : md5sum_rpm] ******************************************************
skipping: [XXX.XXX.XXX.233]

TASK [java7 : check_md5sum] ****************************************************
skipping: [XXX.XXX.XXX.233]

TASK [java7 : install_oraclejdk] ***********************************************
skipping: [XXX.XXX.XXX.233]

TASK [java7 : include] *********************************************************
included: /tmp/tmpP0U4b9/hadoop/playbooks/roles/java7/tasks/config.yml for XXX.XXX.XXX.233

TASK [java7 : copy_bash_profile] ***********************************************
ok: [XXX.XXX.XXX.233]

TASK [java7 : copy_sudoers_conf_of_JAVA_HOME] **********************************
ok: [XXX.XXX.XXX.233]

TASK [slavenode : include] *****************************************************
included: /tmp/tmpP0U4b9/hadoop/playbooks/roles/slavenode/tasks/install.yml for XXX.XXX.XXX.233

TASK [slavenode : install_slavenode_packages] **********************************
ok: [XXX.XXX.XXX.233] => (item=[u'hadoop-hdfs-datanode', u'hadoop-yarn-nodemanager', u'hadoop-mapreduce'])

TASK [slavenode : include] *****************************************************
included: /tmp/tmpP0U4b9/hadoop/playbooks/roles/slavenode/tasks/config.yml for XXX.XXX.XXX.233

TASK [slavenode : create_datanode_data_dir] ************************************
[DEPRECATION WARNING]: Using bare variables is deprecated. Update your playbooks so that the environment value uses the full variable syntax ('{{dfs_datanode_data_dirs}}'). This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
changed: [XXX.XXX.XXX.233] => (item=/hadoop/data01/dfs/datadir)
changed: [XXX.XXX.XXX.233] => (item=/hadoop/data02/dfs/datadir)
ok: [XXX.XXX.XXX.233] => (item=/hadoop/data03/dfs/datadir)
changed: [XXX.XXX.XXX.233] => (item=/hadoop/data04/dfs/datadir)
changed: [XXX.XXX.XXX.233] => (item=/hadoop/data05/dfs/datadir)
changed: [XXX.XXX.XXX.233] => (item=/hadoop/data06/dfs/datadir)
changed: [XXX.XXX.XXX.233] => (item=/hadoop/data07/dfs/datadir)
changed: [XXX.XXX.XXX.233] => (item=/hadoop/data08/dfs/datadir)
changed: [XXX.XXX.XXX.233] => (item=/hadoop/data09/dfs/datadir)
changed: [XXX.XXX.XXX.233] => (item=/hadoop/data10/dfs/datadir)

TASK [slavenode : fix_init_scripts] ********************************************
changed: [XXX.XXX.XXX.233] => (item={u'path': u'hadoop-hdfs', u'name': u'hadoop-hdfs-datanode'})
changed: [XXX.XXX.XXX.233] => (item={u'path': u'hadoop-yarn', u'name': u'hadoop-yarn-nodemanager'})

TASK [slavenode : create_symbolic_link_to/etc/init.d] **************************
ok: [XXX.XXX.XXX.233] => (item={u'path': u'hadoop-hdfs', u'name': u'hadoop-hdfs-datanode'})
ok: [XXX.XXX.XXX.233] => (item={u'path': u'hadoop-yarn', u'name': u'hadoop-yarn-nodemanager'})

TASK [slavenode : create_holder_directory_for_hadoop_tmp_dir] ******************
ok: [XXX.XXX.XXX.233]

TASK [slavenode : create_yarn_pid_dir] *****************************************
ok: [XXX.XXX.XXX.233]

TASK [slavenode : create_hdfs_log_dir] *****************************************
changed: [XXX.XXX.XXX.233]

TASK [slavenode : create_yarn_log_dir] *****************************************
changed: [XXX.XXX.XXX.233]

TASK [slavenode : copy_defaults_file] ******************************************
ok: [XXX.XXX.XXX.233] => (item=hadoop-hdfs-datanode)
ok: [XXX.XXX.XXX.233] => (item=hadoop-yarn-nodemanager)

TASK [slavenode : ensure_version_link] *****************************************
changed: [XXX.XXX.XXX.233]

PLAY RECAP *********************************************************************
XXX.XXX.XXX.233            : ok=50   changed=5    unreachable=0    failed=0
Remove the DataNode from the decommission list...
!cp group_vars/hadoop_all_cluster1 {work_dir}/hadoop_all_cluster1_old
Edit group_vars/hadoop_all_cluster1 and remove the host from datanode_decommission_nodes.
The node had not been decommissioned, so this step was skipped.
#!diff -ur {work_dir}/hadoop_all_cluster1_old group_vars/hadoop_all_cluster1
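If you want to double-check the edit programmatically, here is a minimal sketch. It assumes PyYAML is available on the notebook host and that the group_vars file is plain YAML with a datanode_decommission_nodes list; adjust if the file is templated differently.
import yaml  # assumption: PyYAML is installed
with open('group_vars/hadoop_all_cluster1') as f:
    group_vars = yaml.safe_load(f)
# Expect False: the new host must not be listed for decommission.
print(target_hostname in (group_vars.get('datanode_decommission_nodes') or []))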
Deliver the latest configurations to all nodes...
#!ansible-playbook -CDv {playbook_dir}/conf_base.yml -l {target_group}
#!ansible-playbook {playbook_dir}/conf_base.yml -l {target_group}
OK, I will ask the NameNodes to reload the node settings...
#!ansible hadoop_namenode -m shell -a 'hdfs dfsadmin -fs hdfs://$(hostname):8020 -refreshNodes' --sudo --sudo-user hdfs -l {target_group}
Start cgroups...
!ansible-playbook {playbook_dir}/install-base.yml -l { host_new_machine }
PLAY [hadoop_all] **************************************************************

TASK [setup] *******************************************************************
ok: [XXX.XXX.XXX.233]

TASK [os : include] ************************************************************
included: /tmp/tmpP0U4b9/hadoop/playbooks/roles/os/tasks/limits.yml for XXX.XXX.XXX.233

TASK [os : set_nofile_soft_limit] **********************************************
ok: [XXX.XXX.XXX.233]

TASK [os : set_nofile_hard_limit] **********************************************
ok: [XXX.XXX.XXX.233]

TASK [os : set_core_soft_limit] ************************************************
ok: [XXX.XXX.XXX.233]

TASK [os : set_core_hard_limit] ************************************************
ok: [XXX.XXX.XXX.233]

TASK [os : include] ************************************************************
included: /tmp/tmpP0U4b9/hadoop/playbooks/roles/os/tasks/thp.yml for XXX.XXX.XXX.233

TASK [os : set_transparent_hugepage] *******************************************
ok: [XXX.XXX.XXX.233]

TASK [os : include] ************************************************************
included: /tmp/tmpP0U4b9/hadoop/playbooks/roles/os/tasks/kernel.yml for XXX.XXX.XXX.233

TASK [os : set_local_port_range] ***********************************************
ok: [XXX.XXX.XXX.233]

TASK [os : set_somaxconn] ******************************************************
ok: [XXX.XXX.XXX.233]

TASK [cgroups : include] *******************************************************
included: /tmp/tmpP0U4b9/hadoop/playbooks/roles/cgroups/tasks/install.yml for XXX.XXX.XXX.233

TASK [cgroups : install_cgroups] ***********************************************
ok: [XXX.XXX.XXX.233] => (item=[u'libcgroup', u'libcgroup-devel'])

TASK [cgroups : include] *******************************************************
included: /tmp/tmpP0U4b9/hadoop/playbooks/roles/cgroups/tasks/conf.yml for XXX.XXX.XXX.233

TASK [cgroups : create_direcoty_cgroups_scripts] *******************************
ok: [XXX.XXX.XXX.233]

TASK [cgroups : copy_cgroups_scripts] ******************************************
ok: [XXX.XXX.XXX.233] => (item=cgroups.sh)

TASK [cgroups : copy_cgconfig.conf] ********************************************
ok: [XXX.XXX.XXX.233]

TASK [cgroups : check_chkconfig_cgconfig] **************************************
ok: [XXX.XXX.XXX.233]

TASK [cgroups : set_on_to_cgconfig_of_chkconfig] *******************************
skipping: [XXX.XXX.XXX.233]

TASK [cgroups : started_cgconfig] **********************************************
changed: [XXX.XXX.XXX.233]

TASK [cgroups : reboot] ********************************************************
skipping: [XXX.XXX.XXX.233]

TASK [cgroups : wait for SSH port down] ****************************************
skipping: [XXX.XXX.XXX.233]

TASK [cgroups : wait for SSH port up] ******************************************
skipping: [XXX.XXX.XXX.233]

TASK [cgroups : started_cgconfig] **********************************************
ok: [XXX.XXX.XXX.233]

PLAY RECAP *********************************************************************
XXX.XXX.XXX.233            : ok=20   changed=1    unreachable=0    failed=0
Start the DataNode...
!ansible-playbook {playbook_dir}/start_datanode.yml -l { host_new_machine }
PLAY [hadoop_slavenode] ********************************************************

TASK [setup] *******************************************************************
ok: [XXX.XXX.XXX.233]

TASK [start_hadoop-hdfs-datanode] **********************************************
changed: [XXX.XXX.XXX.233]

PLAY RECAP *********************************************************************
XXX.XXX.XXX.233            : ok=2    changed=1    unreachable=0    failed=0
OK. Checking the health...
!ansible hadoop_client -s -U hdfs -a 'hdfs dfsadmin -report' -l {target_group}
XXX.XXX.XXX.200 | SUCCESS | rc=0 >>
Configured Capacity: 268707633709056 (244.39 TB)
Present Capacity: 255047025981280 (231.96 TB)
DFS Remaining: 251740821216440 (228.96 TB)
DFS Used: 3306204764840 (3.01 TB)
DFS Used%: 1.30%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 746
-------------------------------------------------
Live datanodes (9):
Name: XXX.XXX.XXX.226:1004 (sn02022001)
Hostname: sn02022001
Decommission Status : Normal
Configured Capacity: 29528238325760 (26.86 TB)
DFS Used: 516681058899 (481.20 GB)
Non DFS Used: 1501183657196 (1.37 TB)
DFS Remaining: 27510373609665 (25.02 TB)
DFS Used%: 1.75%
DFS Remaining%: 93.17%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 8
Last contact: Fri Sep 02 19:24:04 JST 2016
Name: XXX.XXX.XXX.232:1004 (sn02032001)
Hostname: sn02032001
Decommission Status : Normal
Configured Capacity: 29528238325760 (26.86 TB)
DFS Used: 342266323665 (318.76 GB)
Non DFS Used: 1501317738175 (1.37 TB)
DFS Remaining: 27684654263920 (25.18 TB)
DFS Used%: 1.16%
DFS Remaining%: 93.76%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 13
Last contact: Fri Sep 02 19:24:04 JST 2016
Name: XXX.XXX.XXX.233:1004 (sn02031601)
Hostname: sn02031601
Decommission Status : Normal
Configured Capacity: 30512734584832 (27.75 TB)
DFS Used: 21571022848 (20.09 GB)
Non DFS Used: 1550786138112 (1.41 TB)
DFS Remaining: 28940377423872 (26.32 TB)
DFS Used%: 0.07%
DFS Remaining%: 94.85%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Fri Sep 02 19:24:03 JST 2016
Name: XXX.XXX.XXX.234:1004 (sn02031201)
Hostname: sn02031201
Decommission Status : Normal
Configured Capacity: 29528238325760 (26.86 TB)
DFS Used: 540451153248 (503.33 GB)
Non DFS Used: 1501181450926 (1.37 TB)
DFS Remaining: 27486605721586 (25.00 TB)
DFS Used%: 1.83%
DFS Remaining%: 93.09%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 8
Last contact: Fri Sep 02 19:24:03 JST 2016
Name: XXX.XXX.XXX.228:1004 (sn02021201)
Hostname: sn02021201
Decommission Status : Normal
Configured Capacity: 29528238325760 (26.86 TB)
DFS Used: 510509822423 (475.45 GB)
Non DFS Used: 1501046187970 (1.37 TB)
DFS Remaining: 27516682315367 (25.03 TB)
DFS Used%: 1.73%
DFS Remaining%: 93.19%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 6
Last contact: Fri Sep 02 19:24:03 JST 2016
Name: XXX.XXX.XXX.231:1004 (sn02032401)
Hostname: sn02032401
Decommission Status : Normal
Configured Capacity: 31497230843904 (28.65 TB)
DFS Used: 157319332033 (146.52 GB)
Non DFS Used: 1601186356193 (1.46 TB)
DFS Remaining: 29738725155678 (27.05 TB)
DFS Used%: 0.50%
DFS Remaining%: 94.42%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 8
Last contact: Fri Sep 02 19:24:05 JST 2016
Name: XXX.XXX.XXX.236:1004 (sn02030401)
Hostname: sn02030401
Decommission Status : Normal
Configured Capacity: 29528238325760 (26.86 TB)
DFS Used: 146715107328 (136.64 GB)
Non DFS Used: 1501427626954 (1.37 TB)
DFS Remaining: 27880095591478 (25.36 TB)
DFS Used%: 0.50%
DFS Remaining%: 94.42%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 12
Last contact: Fri Sep 02 19:24:03 JST 2016
Name: XXX.XXX.XXX.225:1004 (sn02022401)
Hostname: sn02022401
Decommission Status : Normal
Configured Capacity: 29528238325760 (26.86 TB)
DFS Used: 530144455857 (493.74 GB)
Non DFS Used: 1501045214254 (1.37 TB)
DFS Remaining: 27497048655649 (25.01 TB)
DFS Used%: 1.80%
DFS Remaining%: 93.12%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 6
Last contact: Fri Sep 02 19:24:03 JST 2016
Name: XXX.XXX.XXX.230:1004 (sn02020401)
Hostname: sn02020401
Decommission Status : Normal
Configured Capacity: 29528238325760 (26.86 TB)
DFS Used: 540546488539 (503.42 GB)
Non DFS Used: 1501433357996 (1.37 TB)
DFS Remaining: 27486258479225 (25.00 TB)
DFS Used%: 1.83%
DFS Remaining%: 93.08%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 14
Last contact: Fri Sep 02 19:24:04 JST 2016
OK, it seems that the cluster is HEALTHY.
Though "Missing blocks (with replication factor 1)" in the output above shows 746, this is caused by the known issue HDFS-8806, so the counter can be ignored.
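As an optional extra check (not part of the original run), a minimal sketch that captures the report and asserts the new host is listed as a live DataNode; the capture variable is hypothetical:
report = !ansible hadoop_client -s -U hdfs -a 'hdfs dfsadmin -report' -l {target_group}
# Collect the hostnames of the live DataNodes from the report.
live_hosts = [l.split(':', 1)[1].strip() for l in report if l.strip().startswith('Hostname:')]
assert target_hostname in live_hosts, '%s is not reported as a live DataNode' % target_hostname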
Additionally, I would like to check the result of fsck...
!ansible hadoop_client -s -U hdfs -a 'hdfs fsck /' -l {target_group}
XXX.XXX.XXX.200 | SUCCESS | rc=0 >>
FSCK started by hdfs (auth:KERBEROS_SSL) from /XXX.XXX.XXX.200 for path / at Fri Sep 02 19:24:30 JST 2016
( ... fsck progress dots omitted ... )
...................................................................Status: HEALTHY
Total size: 1091981477852 B (Total open files size: 830 B)
Total dirs: 1414
Total files: 16167
Total symlinks: 0 (Files currently being written: 11)
Total blocks (validated): 23619 (avg. block size 46233179 B) (Total open file blocks (not validated): 10)
Minimally replicated blocks: 23619 (100.0 %)
Over-replicated blocks: 0 (0.0 %)
Under-replicated blocks: 0 (0.0 %)
Mis-replicated blocks: 0 (0.0 %)
Default replication factor: 3
Average block replication: 3.0
Corrupt blocks: 0
Missing replicas: 0 (0.0 %)
Number of data-nodes: 9
Number of racks: 1
FSCK ended at Fri Sep 02 19:24:30 JST 2016 in 303 milliseconds
The filesystem under path '/' is HEALTHY
Connecting to namenode via https://cn01070401:50470/fsck?ugi=hdfs&path=%2F
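If you prefer the notebook to fail loudly on an unhealthy filesystem, a hedged sketch (again capturing into a hypothetical variable rather than printing):
fsck_out = !ansible hadoop_client -s -U hdfs -a 'hdfs fsck /' -l {target_group}
# fsck prints "The filesystem under path '/' is HEALTHY" on success.
assert any('is HEALTHY' in l for l in fsck_out), 'fsck did not report HEALTHY'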
!ansible-playbook {playbook_dir}/start_nodemanager.yml -l { host_new_machine }
PLAY [hadoop_slavenode] ********************************************************

TASK [setup] *******************************************************************
ok: [XXX.XXX.XXX.233]

TASK [start_hadoop-yarn-nodemanager] *******************************************
changed: [XXX.XXX.XXX.233]

PLAY RECAP *********************************************************************
XXX.XXX.XXX.233            : ok=2    changed=1    unreachable=0    failed=0
Confirm the YARN nodes...
!ansible hadoop_client -s -U yarn -a 'yarn node -list' -l {target_group}
XXX.XXX.XXX.200 | SUCCESS | rc=0 >>
Total Nodes:9
Node-Id Node-State Node-Http-Address Number-of-Running-Containers
sn02030401:45454 RUNNING sn02030401:8044 0
sn02032001:45454 RUNNING sn02032001:8044 0
sn02021201:45454 RUNNING sn02021201:8044 0
sn02022001:45454 RUNNING sn02022001:8044 0
sn02022401:45454 RUNNING sn02022401:8044 0
sn02031601:45454 RUNNING sn02031601:8044 0
sn02020401:45454 RUNNING sn02020401:8044 0
sn02031201:45454 RUNNING sn02031201:8044 0
sn02032401:45454 RUNNING sn02032401:8044 0
16/09/02 19:24:52 INFO impl.TimelineClientImpl: Timeline service address: https://cn01070403:8190/ws/v1/timeline/
sn02031601 is running, OK.
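This confirmation can also be scripted; a minimal sketch (hypothetical, not part of the original run) that asserts the new NodeManager is RUNNING:
nodes = !ansible hadoop_client -s -U yarn -a 'yarn node -list' -l {target_group}
# Node lines look like "sn02031601:45454  RUNNING  sn02031601:8044  0".
new_node = [l for l in nodes if l.strip().startswith(target_hostname + ':')]
assert new_node and 'RUNNING' in new_node[0], '%s is not RUNNING' % target_hostname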
!ansible-playbook {playbook_dir}/start_hbase_regionserver.yml -l { host_new_machine }
PLAY [hadoop_hbase_regionserver] ***********************************************

TASK [setup] *******************************************************************
ok: [XXX.XXX.XXX.233]

TASK [start_hbase-regionserver] ************************************************
changed: [XXX.XXX.XXX.233]

PLAY RECAP *********************************************************************
XXX.XXX.XXX.233            : ok=2    changed=1    unreachable=0    failed=0
Confirm the Region Servers...
from kazoo.client import KazooClient
# Find a reachable ZooKeeper server in this cluster via an Ansible ping.
zk_stdout = !ansible -l {target_group} -m ping hadoop_zookeeperserver
zk_hosts = [line.split()[0] for line in zk_stdout if "SUCCESS" in line]
# Read /hbase/master; its data contains the server name of the active HBase Master.
zk = KazooClient(hosts='%s:2181' % zk_hosts[0], read_only=True)
zk.start()
(master_result, v) = zk.get("/hbase/master")
zk.stop()
# Map the active master's name back to its Service IP using hosts.csv.
active_master = None
for m in filter(lambda m: m['HBase Master'], machines):
    if m['Name'] in master_result:
        active_master = m['Service IP']
active_master
'XXX.XXX.XXX.197'
!zk-shell { zk_hosts[0] } --run-once "ls /hbase/rs"
sn02020401,16020,1468567893041 sn02021201,16020,1468567892900 sn02022001,16020,1468567893194 sn02022401,16020,1468567894744 sn02030401,16020,1470634628613 sn02031201,16020,1458102803012 sn02031601,16020,1472811915179 sn02032001,16020,1469169835467 sn02032401,16020,1470720586071
sn02031601 is included in the list of Region Servers, OK.
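Since kazoo is already imported above, the same check is possible without zk-shell; a minimal sketch assuming the same ZooKeeper quorum:
zk = KazooClient(hosts='%s:2181' % zk_hosts[0], read_only=True)
zk.start()
rs_nodes = zk.get_children('/hbase/rs')  # ephemeral znodes, one per live RegionServer
zk.stop()
# RegionServer znode names look like "sn02031601,16020,1472811915179".
assert any(rs.startswith(target_hostname + ',') for rs in rs_nodes)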
It's done.
Remove temporary files...
!rm -fr {work_dir}